\documentclass[11pt,a4paper,titlepage,oneside]{report}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc} % input encoding is UTF-8
\usepackage[table,xcdraw]{xcolor}
\usepackage{graphicx}
\usepackage{tabularx}
\usepackage{subcaption}
\usepackage{color}
\usepackage[unicode,pdftex]{hyperref}
\usepackage{longtable}
\usepackage{listings}
\usepackage{amsmath}
\usepackage{pdfpages}
\usepackage[acronym,numberedsection=autolabel]{glossaries}
\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}
\lstset{frame=tb,
language=Bash,
aboveskip=3mm,
belowskip=3mm,
showstringspaces=false,
columns=flexible,
basicstyle={\small\ttfamily},
numbers=none,
numberstyle=\tiny\color{gray},
keywordstyle=\color{blue},
commentstyle=\color{dkgreen},
stringstyle=\color{mauve},
breaklines=true,
breakatwhitespace=true,
tabsize=3
}
\makeglossaries
\newacronym{netcdf}{NetCDF}{Network Common Data Form}
\newacronym{NTNU}{NTNU}{the Norwegian University of Science and Technology}
\newacronym{QA}{QA}{Quality Assurance}
\newacronym{L}{L}{Low}
\newacronym{M}{M}{Medium}
\newacronym{H}{H}{High}
\newacronym{HTML}{HTML}{HyperText Markup Language}
\newacronym{HTML5}{HTML5}{HyperText Markup Language version 5}
\newacronym{CSS}{CSS}{Cascading Style Sheets}
\newacronym{EDGE}{EDGE}{Enhanced Data rates for GSM Evolution}
\newacronym{PNG}{PNG}{Portable Network Graphics}
\newacronym{MEOPAR}{MEOPAR}{Marine Environmental Observation Prediction and Response Network}
\newacronym{WMS}{WMS}{Web Map Service}
\newacronym{THREDDS}{THREDDS}{Thematic Realtime Environmental Distributed Data Services}
\newacronym{OPENDAP}{OPeNDAP}{Open-source Project for a Network Data Access Protocol}
\newacronym{OGC}{OGC}{Open Geospatial Consortium}
\newacronym{WCS}{WCS}{Web Coverage Service}
\newacronym{HTTP}{HTTP}{HyperText Transfer Protocol}
\newacronym{NCML}{NcML}{\gls{netcdf} Markup Language}
\newacronym{XML}{XML}{Extensible Markup Language}
\newacronym{TDS}{TDS}{\gls{THREDDS} Data Server}
\newacronym{SSD}{SSD}{Solid State Drive}
\newacronym{NCWMS}{ncWMS}{\gls{netcdf} Web Map Service}
\newacronym{BSD}{BSD}{Berkeley Software Distribution}
\newacronym{JPEG}{JPEG}{Joint Photographic Experts Group}
\newacronym{GIF}{GIF}{Graphics Interchange Format}
\newacronym{CPU}{CPU}{Central Processing Unit}
\newacronym{TMS}{TMS}{Tile Map Service}
\newacronym{URL}{URL}{Uniform Resource Locator}
\newacronym{WMTS}{WMTS}{Web Map Tile Service}
\newacronym{WMSC}{WMSC}{Web Map Server Tile Cache}
\newacronym{GUI}{GUI}{Graphical User Interface}
\newacronym{API}{API}{Application Programming Interface}
\newacronym{JSON}{JSON}{JavaScript Object Notation}
\newacronym{Fimex}{Fimex}{File Interpolation, Manipulation and EXtraction}
\newacronym{PSP}{PSP}{Polar Stereographic Projection}
\newacronym{SASS}{SASS}{Syntactically Awesome Style Sheets}
\newacronym{WFS}{WFS}{Web Feature Service}
\newacronym{GCS}{GCS}{Geographical Coordinate System}
\newacronym{GIS}{GIS}{Geographical Information System}
\newacronym{PPA}{PPA}{Personal Package Archive}
\newacronym{WKT}{WKT}{Well-known text}
\newacronym{PCS}{PCS}{Projected Coordinate System}
\newacronym{NCO}{NCO}{NetCDF operators for file manipulation and simple calculations}
\newacronym{NPM}{npm}{NodeJS Package Manager}
\newacronym{CRS}{CRS}{Coordinate Reference System}
\newacronym{EPSG}{EPSG}{European Petroleum Survey Group}
\newacronym{USGS}{USGS}{U.S. Geological Survey}
\newglossaryentry{pre-study}{name=pre-study, description={is a document produced before the implementation phase, which informs the customer of possible solutions to their problem}}
\newglossaryentry{pre-delivery}{name=pre-delivery, description={is a preliminary delivery of the main report, meant to give the external examiner an introduction to the group's work}}
\newglossaryentry{front-end}{name=front-end, description={is the name given to the part of software that the user interacts with}}
\newglossaryentry{back-end}{name=back-end, description={is the name given to the part of software the user does not interact with, and that performs a specialized task}}
\newglossaryentry{course compendium}{name=course compendium, description={is a document given to the students, which outlines the course, describes the rules and expectations for the group's work, and gives the structure and content of the final report}}
\newglossaryentry{protocol}{name=protocol, description={is a set of predefined rules of conduct and procedures. In computer science, it often describes the rules for communication between two parts}}
\newglossaryentry{open source}{name=open source, description={is software where the source code is available to everyone, and anyone can alter and share the code}}
\newglossaryentry{prototype}{name=prototype, description={is an early version of something, meant to test specific features}}
\newglossaryentry{proof of concept}{name=proof of concept, description={is a system that proves the feasibility of an idea. It is usually not usable as part of a final system, but rather built as proof of an idea}}
\begin{document}
% Title page %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{titlepage}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{img/logo_NTNU.png}\\
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{img/logo_SINTEF.jpg}
\end{subfigure}
\end{figure}
\begin{center}
{\LARGE \textbf{TDT4290 - Customer Driven Project}}
\vfill
{\Huge \textbf{Ocean forecast}}
\vspace{12pt}
{\LARGE \textbf{SINTEF}}
\vspace{30pt}
{\LARGE \textbf{Final report}}
\vfill
{\LARGE \textbf{Autumn 2014}}
\end{center}
\vfill
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} l l}
\textbf{Group 6} & \textbf{Advisor} \\
Arve Nygård & Gleb Sizov \\
Anders Smedegaard Pedersen & \\
Emil Jakobus Schroeder & \\
Hans Kristian Henriksen & \\
Marco Radavelli & \\
Ondrej Hujnak & \\
Ruben Håskjold Fagerli & \\
\end{tabular*}
\end{titlepage}
% Empty page %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
\thispagestyle{empty}
\mbox{}
\newpage
% Abstract %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{abstract}
Both in the choice of location for fish farms and in their daily operations, historical data and up-to-date projections of sea currents are vital. SINTEF has given us the task of visualizing data on ocean currents, temperature, salinity and other variables in areas around fish farms, to be used by stakeholders in the sea farming industry. SINTEF currently has a solution for this, but wishes to improve it substantially.
Our group of seven students has used the fall semester at the Norwegian University of Science and Technology to carry out a full-scale software development process in order to solve this task.
The group has written a \gls{pre-study}, in which systems currently in use were investigated, and technologies that could be used for custom-built solutions were evaluated. Using the Scrum methodology, the group has developed both a front- and back-end \gls{prototype} of such a system.
The \gls{prototype} is able to read \gls{netcdf} files from SINTEF's simulations and visualize them dynamically at the user's discretion. To display the data on a map, the \gls{Fimex} library is used to convert the data to the correct projection. Although the solution is a \gls{prototype}, and not ready for full-scale deployment, it proves that developing a custom system is within reach and should be considered.
In addition to the prototype, the group provides SINTEF with advice on how to continue development, and on the challenges that must be addressed to commercialize the solution.
\end{abstract}
% Signatures %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\thispagestyle{empty}
\begin{center}
{\Huge Preface} \hfill \\
\medskip
The group would like to thank our advisor, Gleb Sizov, for his help and support during the project. He has helped us see solutions to the challenges we have faced, and has provided answers to all our questions.
We would also like to thank our customer SINTEF, and especially their representatives Finn Olav Bjørnson, Morten Alver and Hans Bjelland, who have been with us through the entire project. Their guidance and knowledge of projections and relevant software libraries have been invaluable for our work.
\vfill
{\large \textbf{Trondheim, November 20, 2014}}\\
\vspace{2.5cm}
\begin{tabularx}{\textwidth}{@{\extracolsep{1cm}} X X }
\dotfill & \dotfill \\
~Arve Nygård & ~Anders Smedegaard Pedersen \\[1cm]
\dotfill & \dotfill \\
~Emil Jakobus Schroeder & ~Hans Kristian Henriksen \\[1cm]
\dotfill & \dotfill \\
~Marco Radavelli & ~Ondrej Hujnak \\[1cm]
\dotfill & \\
~Ruben Håskjold Fagerli & \\[1cm]
\end{tabularx}
\end{center}
% Table of contents %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\tableofcontents
\addtocontents{toc}{\protect\thispagestyle{empty}}
% List of figures %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\listoffigures
\addtocontents{lof}{\protect\thispagestyle{empty}}
% List of tables %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\listoftables
\addtocontents{lot}{\protect\thispagestyle{empty}}
\pagenumbering{arabic}
\setcounter{page}{0}
% Main body %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Introduction}
\section{TDT 4290 - Customer driven project}
The task is given in the course TDT4290 - Customer Driven Project at \gls{NTNU}. The goal of the course is
\begin{quote}
(...)to give the students a practical experience of carrying out all the phases of a typical customer guided IS/IT-project. \cite{TDT4290:Intro}
\end{quote}
The course divides the students into random groups, and assigns each group an assignment. The assignments are real problems that businesses need solved.
Although the assignment is to follow the entire process of an IT project, the focus is on the earlier phases. Thus, an important part of the assignment is the work leading up to the implementation phase, although the implementation itself is of course also important. Maintenance is left out of the scope of the projects.
\section{Customer}
The customer is SINTEF Fisheries and Aquaculture (SINTEF Fiskeri og havbruk AS).
SINTEF was founded in 1950 and is today organized into eight divisions. The division of Fisheries and Aquaculture was founded in 1999 and represents technological expertise and industry knowledge in the utilization of renewable marine resources. Under the vision ``Technology for a better society'', it works for a knowledge-based biomarine industry. Its goal is to meet market demands for technological research and development on renewable marine resources.
\section{Ocean currents}
The study of ocean currents allows for predicting what conditions can be expected in an area, in both the short and the long term. Historical data can be used to assess how an area will be exposed to temperature and salinity changes. Up-to-date projections are crucial for day-to-day operations such as feeding and lice removal. They can also be used to predict the spreading of lice from one farm to another.
The most relevant users of the system are therefore fish farmers, either in the planning or the operation phase of a farm. The system is also relevant to anyone conducting ocean research in the given areas, and to other stakeholders who can use information on the condition of the sea in their work.
\section{Assignment and scope}
\label{sec:AssignmentScope}
The client, SINTEF Fisheries and Aquaculture, has given us the project labeled "Ocean Forecast". The task is to improve SINTEF's existing system of storing and retrieving data, as well as the way the data is presented to the end user.
In agreement with the customer, the group has defined the following scope of the assignment:
\begin{quote}
Make a \gls{pre-study} that examines what technology can be used to replace the current collection of PDFs, and instead serve the end user with custom visualizations on a web page. Follow the study with an implementation of a \gls{proof of concept} system that is able to read from a collection of \gls{netcdf}-files, and present data to the end user based on custom chosen variables. We will ignore any bottlenecks that may exist in SINTEF's systems, and assume that they are fixable. Our end report will contain suggestions for further development and analysis.
\end{quote}
After the \gls{pre-study}, the group presented SINTEF with two choices for how this could be achieved. SINTEF chose a development path that required the custom development of both the \gls{front-end} and \gls{back-end}, see section \ref{subsec:DifferentPaths}.
\section{Group resources}
\label{sec:GroupResources}
The group consists of seven people with diverse skills. In the starting phase of the project, the group assessed the different skills, and used this to distribute tasks, and areas of responsibilities.
The project is set to run from August 28th until November 20th, for a total of 85 days, or 12 weeks and 1 day. As with all courses given at \gls{NTNU}, a student is expected to work 12 hours per week for every 7.5 study points. For this course, this amounts to 24 hours of work every week. Because the final presentation is given the week before the end of the semester, the \gls{course compendium} sets the required workload to 25 hours a week.
For our group, this amounts to about 2100 work hours in total. Every week, the group has 6 hours of meeting time, and is required to hand in documents for the advisor meeting; producing these documents is estimated at 4 hours per week. Given this, approximately 1550 hours can be dedicated to solving the customer's problem.
\begin{equation}
\text{Total work hours} = 25\:\frac{\text{hours}}{\text{week}} \times 12\:\text{weeks} \times 7\:\text{people} = \underline{\underline{2100\:\text{hours}}} \nonumber
\end{equation}
\begin{equation}
\begin{split}
\text{Meetings and documents} = (6\:\text{hours of meetings} \times 12\:\text{weeks} \times 7\:\text{people}) + {} \\
(4\:\text{hours for documents} \times 12\:\text{weeks}) = \underline{\underline{552\:\text{hours}}} \nonumber
\end{split}
\end{equation}
\begin{equation}
\text{Time for product} = 2100\:\text{hours} - 552\:\text{hours} = \underline{\underline{1548\:\text{hours}}} \nonumber
\end{equation}
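As a sanity check, the budget above can be reproduced with a few lines of shell arithmetic; the numbers are taken directly from the equations, and the script itself is purely illustrative:

\begin{lstlisting}
# Reproduce the work-hour budget from the equations above.
total=$((25 * 12 * 7))                   # 25 h/week, 12 weeks, 7 people
overhead=$(( (6 * 12 * 7) + (4 * 12) ))  # meetings + weekly documents
product=$((total - overhead))
echo "total=$total overhead=$overhead product=$product"
# prints: total=2100 overhead=552 product=1548
\end{lstlisting}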
Group members are free to distribute the workload as they see fit, and according to their lecture schedule. The only requirement is that deadlines are met, and that the group members participate in all meetings.
\chapter{Planning}
Before starting a project of this scale, a lot of planning has to be done: how development will be carried out, who will be responsible for what, which risks should be planned for, and so on. This chapter gives an overview of the planning done before the project started.
\section{Project stakeholders}
The stakeholders of this project are the following:
\begin{description}
\item[Project owner] SINTEF Fisheries and Aquaculture
\item[Supervisor] Gleb Sizov
\item[Examiner]
\item[Team members] Arve Nygård, Anders Smedegaard Pedersen, Emil Jakobus Schroeder, Hans Kristian Henriksen, Marco Radavelli, Ondrej Hujnak and Ruben Håskjold Fagerli
\end{description}
\section{Background for the project: software system development}
The customer owns a working solution, running on its own servers, for the delivery and analysis of tracked and predicted marine data. The data is stored in a special format, \gls{netcdf}, designed to be self-describing, scalable and appendable \cite{netCDF:factsheet}.
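Because the format is self-describing, the structure of any \gls{netcdf} file can be inspected without custom tooling. Assuming Unidata's netCDF utilities are installed, the header can be printed with \texttt{ncdump}; the file name below is only a placeholder, not one of the customer's actual files:

\begin{lstlisting}
# Print only the header of a NetCDF file: its dimensions, variables
# and attributes. No data values are read. "ocean.nc" is a
# placeholder for one of the customer's simulation files.
if command -v ncdump >/dev/null 2>&1; then
    ncdump -h ocean.nc
else
    echo "ncdump not found; install Unidata's netCDF utilities"
fi
\end{lstlisting}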
The background for this project is the need to optimize the current solution, which is slow and memory-consuming. In the project description, the customer asks the group to either develop a new \gls{back-end} system, or produce a new \gls{front-end} for the current system.
\section{Measurement of project effects}
\label{sec:MeasurementProjEff}
Some predefined criteria must be met for the project to be considered successful. The product must work according to the requirements and pass all tests. All requirements with high or medium priority must be fulfilled. Requirements with low priority are viewed as nice to have, and should be implemented if there is sufficient time.
In this project, the customer set quite open goals, and no precisely quantifiable measurements were defined. This makes the approval of the project subjective, and the group will have to keep the customer closely involved to ensure continuous feedback.
If the work focuses on improving the current solution, the measurements should be:
\begin{itemize}
\item The time it takes to visualize data for the end user should be ``reasonably low'', even over a slow internet connection (e.g. from Chile) or from a mobile device
\item The same features as in the current system are still available
\end{itemize}
If working on new use cases, the measurements should be:
\begin{itemize}
\item The same features as in the current system are still available
\item New features (such as a mobile app and new charts) are added in order to cover more use cases
\end{itemize}
In both cases, an \gls{open source} solution is considered preferable to a commercial one.
\section{Life cycle model}
\label{sec:Scrum}
The group has chosen to use the Scrum methodology for the project. This is an agile method that focuses on incremental development. The customer is heavily involved in the development process, and is given frequent updates and demos of the progress of the team.
\subsection{Scrum cycle}
\begin{figure}[h]
\begin{center}
\includegraphics[height=150px,width=340px]{img/Scrum.jpg}
\caption{Scrum cycle}\cite{ScrumCycle}
\label{fig:scrum}
\end{center}
\end{figure}
In the Scrum methodology, there is a set way to conduct the development process. The process starts by laying out a product backlog: an overview of what needs to be done to complete the entire product. It does not have to be very detailed, but it should be detailed enough to give a clear idea of how the team should proceed.
After the product backlog is written, the first sprint is planned. In this planning meeting, the team estimates the time needed for each task, and takes on the tasks needed to complete the goal set for the sprint. This process is done in cooperation with the customer. The customer may set the sprint goal, or get more directly involved in the selection of tasks.
When the team has chosen the tasks to be completed, the sprint is locked, and new tasks may not be added by the customer. This gives the development team a noise free environment, and allows them to focus on the task ahead.
In the sprint cycle, which typically lasts for around 3 weeks, the team has daily meetings to report on progress and challenges. This meeting is led by the Scrum master. The daily meeting is known as a standup, as the participants are expected to stand during it. Each person is given about two minutes to give an update on their work. This update should answer three questions:
\begin{itemize}
\item What did I do yesterday?
\item What am I going to do today?
\item Are there any problems that can obstruct my progress?
\end{itemize}
When the sprint is completed, the customer is given a demo of what the team has accomplished. This should cover everything agreed on in the sprint planning meeting. The customer is then free to reprioritize items in the product backlog before the team picks new tasks for the next sprint.
After the sprint, the team should conduct a retrospective. In the retrospective, the team members discuss the sprint that has just concluded, and sort their experiences into one of three categories:
\begin{itemize}
\item Things we should continue doing
\item Things we should stop doing
\item Things we should start doing
\end{itemize}
This process is usually conducted together with the customer, who can add valuable insight into the team's work from a business perspective. The goal of the retrospective is for the team to identify what helps them in their work and what hinders them. This way, it is possible to make changes that positively affect the team.
\subsection{Scrum adaptations}
\label{subsec:scrumadaptations}
As our development project is part of a university course that runs in parallel with other courses, it is impossible to follow the standard Scrum process. Given this, the group made some choices that diverge from the standard Scrum methodology.
\begin{itemize}
\item Meetings are not conducted every day, but rather twice a week. This fits better with the schedule of the group members, and takes into consideration that there is no demand for work to be done every day.
\item Sprint length has been chosen as one week. This is short compared to the Scrum standard, but it provides the group with a motivational pressure, as well as making the sprints easier to estimate.
\item Product demos are set to every other week. This means that there will be sprints that are not presented to the customer. In choosing this, the group is able to have sprints that focus on the internal demands set forth by \gls{NTNU}, as well as making more progress between customer meetings.
\item Retrospectives at the end of each sprint would mean setting aside around 50--70 hours of work time. As the group is only in a position to change its own work habits, and cannot make changes to the course or to the general work process of the customer, it was decided that retrospectives would be dropped. In addition, the customer would probably not be able to attend retrospectives, lowering their value. Instead, team meetings and advisor meetings take the place of retrospectives.
\item The Scrum master is usually responsible for shielding the development team from the outside world. In this course, this is not possible, as group members are required to attend advisor meetings and customer meetings. Thus, the Scrum master is more of a project manager, responsible for the team's progress and efforts.
\end{itemize}
These changes make the Scrum method well suited for this project. While it is important to try to follow well-tested methodologies closely, one cannot allow them to block progress or disturb the workflow of the team. In a special work situation such as ours, adaptations of this kind are necessary for the efficient use of any development methodology.
\subsection{Other alternatives}
While it was clear early on that the team wanted to work in an agile manner, other development methodologies were also discussed, in particular Kanban and waterfall with feedback.
\paragraph{Kanban}
While Scrum is driven by estimations, and motivates by limiting the amount of time for each sprint, Kanban attempts to motivate by limiting the amount of concurrent tasks the team is allowed to have. For example, the team can set a limit of 4 tasks in development, 3 in \gls{QA} and 2 in test. This way, the team is motivated to finish tasks, as it will free up space for new tasks.
As no one on the team is particularly proficient in Kanban, while most of the team has some knowledge of Scrum, Kanban did not seem like a wise choice. The biggest factor against Kanban was the fear of having to spend a lot of time learning the methodology, leaving less time for the actual development.
\paragraph{Waterfall with feedback}
The waterfall methodology is one of the oldest development methodologies, and is usually viewed as a rigid, one-way process: every step must be completed before moving to the next. This means that the customer must clearly specify what they want before any development can start. The team therefore rejected the traditional waterfall model, as the customer is not certain about what they want.
An adaptation to the waterfall model is the waterfall model with feedback. This model follows the traditional waterfall methodology, but allows the team to move back to a previous phase when needed. It is further illustrated in figure \ref{fig:WaterfallFeedback}.
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics[height=150px,width=300px]{img/Waterfall_with_feedback.png}}
\caption{Waterfall with feedback} \cite{waterfall:feedback}
\label{fig:WaterfallFeedback}
\end{center}
\end{figure}
Even with the addition of feedback loops, the team felt that the waterfall model did not fit the rapid development, and the possible changes in scope and requirements, of this project.
\section{Requirements}
We are using a Scrum-like project management framework, and working \glspl{prototype} are expected to be produced at the end of every other sprint. Requirements have been prioritized by the customer on a scale from 1 to 100. Some of the requirements for our project are inevitably overlapping, and a significant part of the requirements must be accomplished in order to produce a working solution. This is reflected in the prioritization made by the customer. The requirements of the project are discussed in detail in chapter \ref{chap:Requirements}.
\section{Project plan}
This section describes the specific layout of the project, which follows the Scrum methodology with the adaptations discussed in section \ref{subsec:scrumadaptations}. The first five weeks were spent planning, doing the \gls{pre-study} (which included a preliminary implementation of the critical part of the system), and writing the requirement specification. After this, the project was divided into sprints of one week each. At the end of the last sprint, all requirements should be fulfilled.
\subsection{Phases}
After the initiation phase and the production of the \gls{pre-study}, a total of six Scrum sprints were planned. The full list of project phases is:
\begin{itemize}
\item Initiation phase (pre-planning)
\item Project Planning
\item \Gls{pre-study} Sprint (Sprint 0)
\item Sprint 1
\item \Gls{pre-delivery} of report (17th October)
\item Sprint 2
\item Sprint 3
\item Sprint 4
\item Sprint 5
\item Sprint 6
\item Finish report
\item Prepare for presentation
\item Final Delivery and demonstration
\end{itemize}
Figure \ref{fig:gantt} presents the Gantt diagram for the project:
\begin{figure}[h]
\begin{center}
\includegraphics[height=130px,width=440px]{img/gantt.png}
\caption{Gantt diagram}
\label{fig:gantt}
\end{center}
\end{figure}
\subsection{Activities}
\begin{description}
\item[Pre-planning] During the pre-planning activity, the group members get to know each other and come to understand the task. The first meetings with the customer and with the advisor also take place.
\item[\Gls{pre-study} and planning] The \gls{pre-study} and planning activity is the stage where the actual work on the project begins. The group explores solutions, determines which technologies will be used, and decides how the product will be realized.
\item[Documentation] The documentation activity represents time spent documenting the work effort, including implementation and research, as well as administrative tasks such as status reports and documents for the meetings.
\item[Coordination] The coordination activity is accomplished throughout the project, and consists of activities to coordinate the work, such as meetings, internal emails, calls and messages.
\item[Implementation] The implementation activity consists of the implementation of the system. This includes the programming of both the \gls{back-end} and the \gls{front-end} part.
\item[Testing] The testing activity represents time spent testing the system. This includes integration testing, unit testing, functional testing and scenario-driven testing.
\item[Presentation] The presentation activity is the final presentation of the system and the delivery of the report.
\end{description}
\subsection{Milestones}
To mark progress in the project, as well as making sure the team is clear on important deadlines, a set of milestones was set. Some of these are predefined in the course, and some are set by the group.
The following milestones have been defined:
\begin{description}
\item[October 6th] \Gls{pre-study} phase to be completed
\item[October 17th] \Gls{pre-delivery} of report (defined by the course coordinators)
\item[November 17th] Project report to be completed
\item[November 20th] Final presentation day (defined by the course coordinators)
\end{description}
\subsection{Lectures}
In this project there have been a number of guest lectures, and we have made sure that every lecture was attended by at least one member of the team.
The "Group Dynamics" lecture was mandatory, and everyone in the group attended it.
The other lectures that have been attended are:
\begin{itemize}
\item "How to sell in large application projects", Thomas B. Pettersen - Computas (01.09.2014)
\item "Scrum, agile development method", Torgeir Dingsoyr - SINTEF (02.09.2014)
\item "Estimation, agile/practical project work", Fredrik Bach - BEKK, (02.09.2014)
\item "Project management", Stian Mikelsen - Bearingpoint (15.09.2014)
\item "Technical Writing in English", Stewart Clark - \gls{NTNU} (24.09.2014)
\item "Sales techniques with exercises in groups", Morten Selven - Mikos (01.10.2014)
\end{itemize}
\section{Project Organization}
\subsection{Organizational diagram of how the group is organized}
In figure \ref{fig:organizational-structure}, the structure of the group is shown.
\begin{figure}[h]
\begin{center}
\includegraphics[height=260px,width=328px]{img/tdt4290_group_6_organizational_structure.png}
\caption{Organizational diagram.}
\label{fig:organizational-structure}
\medskip
\small
The black lines indicate bi-directional communication. The grey lines denote the preferred way of communication between components. Note that the layout is not hierarchical; it is arranged this way only to fit better on the printed page.
\end{center}
\end{figure}
\subsection{Roles and corresponding responsibilities}
To have a clear division of responsibilities, the group chose to assign central roles early in the project. This was agreed upon through a combination of the wishes of each group member, and their technical insights and personal qualities.
The group decided on the following roles, and their definitions:
\begin{description}
\item[Scrum master] The scrum master's task is to make sure that the group meets its goals; he\footnote{"He" should be read as "he or she" throughout this report} leads the meetings, makes sure that the group's meetings have a structure, and that they finish on time. It is also the scrum master's task to make sure that the team meets deadlines and finishes tasks.
In standard scrum methodology, the scrum master is also tasked with being a barrier between the team and the surrounding environment. He should make sure that any distractions and uncertainties are taken care of, letting the rest of the development team focus purely on their tasks. In our project, this part of the scrum master's role has been difficult to implement, as the course requires all students to take part in activities that would usually be the scrum master's responsibility.
\emph{Role assigned to: \textbf{Ondrej Hujnak}}
\item[Customer contact] The customer contact's responsibility is to arrange customer meetings, forward customer emails to the group when needed, and ensure that all necessary communication with the customer takes place. As far as possible, no other members of the team should contact the customer directly. This ensures that there is no duplicated communication, and gives the customer a single point of contact in the team.
\emph{Role assigned to: \textbf{Marco Radavelli}}
\item[\gls{QA} and testing responsible] Ensures that the implementation fulfills the requirements, designs the test plan, defines and ensures that the Quality Assurance standard is followed during the project. Is responsible for making sure that sufficient testing is conducted.
\emph{Role assigned to: \textbf{Emil Jakobus Schroeder}}
\item[Documentation responsible] Supervises the structure and the content of all the documents the group produces during the project. This includes the \gls{pre-study}, \gls{pre-delivery} and final report. Is also the person responsible for taking minutes during meetings, and making sure that they reflect accurately what was said during the meetings.
\emph{Role assigned to: \textbf{Hans Kristian Henriksen}}
\item[Advisor contact] Arranges the advisor meetings, makes sure that all required documents are sent to the advisor before each meeting. Will, as far as possible, be the only person from the group communicating directly with the advisor on group specific matters.
\emph{Role assigned to: \textbf{Hans Kristian Henriksen}}
\item[\Gls{front-end} leader] The \gls{front-end} leader supervises the \gls{front-end} architecture and implementation, and coordinates the \gls{front-end} developer team. He is expected to have an overview of the process and plan ahead in the case of problems or blocked tasks.
\emph{Role assigned to: \textbf{Anders Smedegaard Pedersen}}
\item[\Gls{back-end} leader] Has the same role as the \gls{front-end} leader, but focuses on the \gls{back-end} team.
\emph{Role assigned to: \textbf{Arve Nygård}}
\item[System architect] The system architect makes sure that there is consistency between requirements, design and implementation, and that the design is feasible and reasonable.
\emph{Role assigned to: \textbf{Ruben Håskjold Fagerli}}
\end{description}
\subsection{Weekly schedule}
We decided to adopt a scrum-like model of software development, with internal group meetings twice a week, weekly meetings with the advisor, and meetings with the customer when needed (typically every other week). The schedule is defined as follows:
\begin{itemize}
\item Mondays 2-3 pm - Advisor meeting
\item Mondays 3-4 pm - Team meeting
\item Thursdays 12-2 pm - Team meeting
\end{itemize}
For weeks 37 and 38, the advisor meeting will be held on Thursdays at 4 pm.
\section{Risk assessment}
For a project of this size and duration, there is a multitude of risks that need to be taken into consideration. The team has identified the risks it feels are most relevant for the project, and attempted to classify them. Each risk is rated along two dimensions: probability and consequence. \textit{Probability} is the likelihood of the risk taking effect, while \textit{consequence} is the severity of the risk if it takes effect. We have assigned each dimension one of three ratings: \gls{L}, \gls{M} and \gls{H}. For each risk, a strategy has been developed to lower both the probability and the consequence.
An overview of risks and strategies is given in the following table:
\begin{longtable}{p{0.7cm} p{2.5cm} p{0.7cm} p{0.7cm} p{6.5cm} }
\caption[]{Risk assessment}\\
\multicolumn{1}{p{0.7cm}}{ID} &
\multicolumn{1}{p{2.5cm}}{Problem desc.} &
\multicolumn{1}{p{0.7cm}}{Prob.} &
\multicolumn{1}{p{0.7cm}}{Sev.} &
\multicolumn{1}{p{6.5cm}}{Action}
\endhead
\caption[Risk assessment]{} \label{riskAss} \\
\hline
\multicolumn{1}{p{0.7cm}}{ID} &
\multicolumn{1}{p{2.4cm}}{Problem description} &
% \multicolumn{1}{p{0.8cm}}{\parbox[t]{0.8cm}{Proba-bility}} &
% \multicolumn{1}{p{0.8cm}}{\parbox[t]{0.8cm}{Cons-equens}} &
\multicolumn{1}{p{0.7cm}}{Prob.} &
\multicolumn{1}{p{0.7cm}}{Sev.} &
\multicolumn{1}{p{6.5cm}}{Action}
\endfirsthead
\hline
\multicolumn{5}{r}{{Continued on next page}} \\
\endfoot
\hline \hline
\endlastfoot
\hline
%\begin{tabularx}{\linewidth}{lXllX}
R1 & Personal Conflicts & \gls{M} & \gls{M} & Might affect the quality of the work, motivation and time required. Probability can be reduced by ensuring communication within the group, and democratic and motivated decisions. \\ \\ \hline
R2 & Assignments given in other courses & \gls{H} & \gls{M} & Time for some students could be less than planned during a few weeks, due to demanding assignments of other courses. Mitigated by planning, frequent workload balancing and contributing with extra work in weeks without as intensive demands. \\ \\ \hline
R3 & Conflicting schedules & \gls{H} & \gls{M} & Reduced by planning meeting times and communication channels for the time between group meetings, and by accepting that we cannot meet every day and that all group members do not always need to be present. \\ \\ \hline
R4 & Overcomplicated solution & \gls{M} & \gls{M} & Probability reduced by good communication both with the customer and internally in the group. \\ \\ \hline
R5 & Technical problems and difficulties & \gls{M} & \gls{H} & To reduce the probability: make a detailed study (and \glspl{prototype} where possible) during the \gls{pre-study} phase. To reduce the consequence: find the person/people within the group which are more comfortable with those new technologies and assign them the particular tasks. Knowledge transfer within the team is also a way to reduce the consequences of this risk. Keeping the customer informed is vital. \\ \\ \hline
R6 & Missing room reservations & \gls{H} & \gls{L} & Will delay work. Avoid by making sure we have a regular room. The advisor might be able to book a permanent room. \\ \\ \hline
R7 & Data loss & \gls{L} & \gls{H} & Limited by introducing a backup strategy, and making sure that all team members follow this strategy. \\ \\ \hline
R8 & Assigned tasks are not completed & \gls{H} & \gls{M} & Consequence reduced by additional work for other group members in order to try to complete the task within the deadline. Talk within the group to find a compromise. Continuous communication between team members will help identify this risk early. \\ \\ \hline
R9 & Illness & \gls{H} & \gls{M} & Probability cannot be reduced, but the consequence can be reduced by temporarily increasing the work for other members of the group. One person should not be assigned tasks in a way that makes other team members unable to cover for them. \\ \\ \hline
R10 & Team members are late, or do not show, to meetings & \gls{H} & \gls{M} & Probability reduced by defining clear rules and schedules for the meetings. \\ \\ \hline
R11 & Team members cannot find the meeting room or mistake the date and time & \gls{H} & \gls{M} & Probability reduced by creating a calendar shared by the group members and kept constantly updated. Ensure communication within the team. Reserve the same room and keep the same schedule for meetings. \\ \\ \hline
R12 & Cannot complete the product backlog or sprint backlog & \gls{M} & \gls{H} & Reduced by making sure to follow the priority of the tasks. The scrum master has a special responsibility to identify this situation early. \\ \\ \hline
R13 & Mistakes in the documentation or in the final product & \gls{M} & \gls{M} & Probability reduced by ensuring that every deliverable is reviewed and checked by at least one different person than the one who wrote it. Make sure to dedicate resources for software testing and review of documents. \\ \\ \hline
R14 & Malfunction of personal computer equipment & \gls{L} & \gls{M} & Probability cannot be reduced. The consequence can be reduced by increasing the work for other team members, and by assigning team members with malfunctioning equipment tasks that do not require specific tools to be installed, so that university computers can be used for the work. \\ \\ \hline
%\end{tabularx}
\end{longtable}
\section{Quality Assurance}
As stated in the \gls{course compendium}, we need to adopt Quality Assurance (QA) standards in our project. Therefore, we entered into an agreement with our customer on response times for the following situations. The agreed time limit is given in brackets:
\begin{itemize}
\item Approval of minutes of customer meeting (24 hours)
\item Feedback on phase documents the customer would like for review (48 hours)
\item Approval of phase documents (48 hours)
\item Answer to a question (24 hours)
\item Delivery of agreed documents, etc. (24 hours)
\item How long in advance the notice of a meeting should be sent (by 12:00 at the latest, two working days before the meeting takes place)
\end{itemize}
\chapter{Tools and technology}
\label{chap:ToolsAndTech}
\section{Documents}
\subsection{\LaTeX}
\LaTeX~is a typesetting system and document markup language that has become the standard for scientific documents. It is easily extended by thousands of different packages and can handle all aspects of scientific papers.
We have chosen to use \LaTeX~for our report for two main reasons. The first is that \LaTeX~sources are easily available and, because they are simple text files, they can be easily versioned by various version control systems. The second is the focus on content rather than form: \LaTeX~sources contain very little information about the exact layout of the page, as \LaTeX~itself chooses the best position of elements at compile time, complying with all typographic norms.
We have created a template in our shared space together with a bibliography file. Each team member is then able to write his sections in the environment that suits him best, while the current state of the report is always available to all members.
\subsection{Google Docs}
Google Docs\footnote{\url{https://docs.google.com}} is a web-based office suite including a text editor, a spreadsheet program and a presentation program. All files created in these programs can easily be shared with collaborators. By sharing files, the collaborators get access to view and edit the files. It is also possible to edit and comment on others' work. We decided to use Google Docs for all documents that did not require the advanced typesetting of \LaTeX, so that we had a common platform for such documents. This saves time, while still keeping documents backed up and accessible.
\section{Project management}
\subsection{Trello}
\label{subsec:Trello}
Trello\footnote{\url{http://www.trello.com}} is a web-based collaborative project management tool originally made by Fog Creek Software (New York, USA)\footnote{\url{http://www.fogcreek.com/}}.
It is based on the Kanban method, which was first implemented by Toyota in 1953 for use in car production. It has since been modified for use in several different industries.
David J. Anderson formulated a model based on Kanban for knowledge based work, specifically software development, where the team incrementally pulls work from a queue \cite{da2004}.
We use Trello for organizing our tasks into four states: "To do", "Doing", "Blocked" and "Done". The default state is "To do", and by changing the state of a task everybody in the group knows what needs to be done, what is being worked on, what tasks are dependent on other tasks or factors, and what tasks are done.
This is a simple yet efficient way of managing tasks. We chose Trello for its ease of use and the fact that it takes very little time to learn. In comparison, a system like Jira\footnote{\url{https://www.atlassian.com/software/jira}} has more features that might have been useful, but it takes more time and effort to learn. Therefore we opted for Trello.
Trello is a freemium web service, meaning that it is free to use, but additional support and features can be accessed for a fee. As we only needed the standard functions, we used the free version.
\subsection{Slack}
\label{subsec:Slack}
Slack is a web-based team communication tool founded by Stewart Butterfield. It offers text chat in different channels and integration with a number of popular services used by development teams\footnote{\url{https://slack.com/integrations}}. This has been useful to us, since we needed to share information that might be more relevant to specific team members, and to have a single means of communication. Using Slack's integrations with Trello and GitHub, the team can be notified of changes on these platforms as well. Slack's capability to share files, or link to files on Google Docs, will also be useful.
\section{Version control}
\subsection{Git}
Git is a distributed version control system developed in 2005 by Linus Torvalds and the Linux development community \cite{ProGit}. Git was made to be small, fast and easy to use, especially for code management, as its main purpose was versioning the Linux kernel source code. Nowadays, Git is one of the most used version control systems in software development, thanks to its open license and powerful features.
We have chosen Git because some members already know it and are able to work efficiently with it. Other advantages are its easy branching and its distributed architecture, which allows working offline.
We have created an organization on GitHub\footnote{\url{https://github.com}} with multiple repositories for the separate parts of our work: reports, server sources and client sources. We have chosen GitHub because it is well known, and because its git hosting offers advanced features and stability. Moreover, some members already had accounts on GitHub and were familiar with the interface, which shortened the time needed to set up a working environment.
\section{Programming and markup languages}
The choice of implementation language, platform and tools is left up to the group. The customer's only requirement is that the solution should be able to run in their environment.
\subsection{Java}
Java is a popular programming language developed by a team led by James Gosling at Sun Microsystems in 1991. It is a full-featured, general-purpose language suitable for developing robust, mission-critical applications \cite{liang}. This, and the fact that everybody in the group had at least basic knowledge of Java, led to the decision to use Java for our \gls{back-end} application.
\subsection{JavaScript}
As stated by David Flanagan in "JavaScript: The Definitive Guide":
\begin{quote}
"JavaScript is part of the triad of technologies that all Web developers must learn".
\end{quote}
He continues to note that JavaScript specifies the behavior of the web page \cite{fd11}. With the specifications of \gls{HTML5} and ECMAScript 6 (ECMAScript being the standardized name of JavaScript), the possibilities of what can be achieved with JavaScript have greatly improved. Since our assignment is to create a web-based solution, it seems natural to use JavaScript as part of the solution. This is further supported by the existence of \gls{open source} libraries designed for making interactive maps, which is relevant for our assignment.
\subsection{HTML}
\gls{HTML} is a markup language used to create web sites. It was created at CERN to share documents, and was later popularized for creating documents for the World Wide Web. As we are making a web-based solution, we will need \gls{HTML}, either compiled from another language or written directly.
\subsection{SASS}
\gls{SASS} is a \gls{CSS} extension language that makes it easier to modularize style sheets and thus keep the styling more maintainable. It introduces variables, nesting, mixins and inheritance to style sheets, features which further improve maintainability. The syntax of \gls{SASS} is very similar to that of \gls{CSS}, which makes it easy to learn if you know \gls{CSS}. \gls{SASS} is a so-called preprocessor, which compiles the \gls{SASS} code into regular \gls{CSS}. This means that neither the web server nor the client needs extra software installed to handle it.
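As a minimal sketch of these features (the variable and selector names here are invented for illustration, not taken from our style sheets), \gls{SASS} code using a variable, a mixin and nesting could look like this:

```scss
// $brand and .map-panel are hypothetical names, for illustration only
$brand: #0066cc;               // variable

@mixin rounded($radius: 4px) { // mixin with a default argument
  border-radius: $radius;
}

.map-panel {
  background: $brand;
  @include rounded(6px);

  .legend {                    // nesting: compiles to ".map-panel .legend"
    color: lighten($brand, 30%);
  }
}
```

The preprocessor compiles this into plain \gls{CSS}, so the browser never sees the \gls{SASS}-specific constructs.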
\section{Index Database}
\subsection{Spatialite}
Spatialite is an extension for SQLite databases, turning them into spatial databases. A spatial database is a database optimized for storing and querying geometric data. The \textit{\gls{netcdf} indexer} makes use of a Spatialite database for both of these purposes. Spatialite is published under a free license, is lightweight, and has the functionality required by our \textit{\gls{netcdf} indexer}.
Spatialite supports several types of geometries; we will be using three:
\begin{itemize}
\item A POLYGON is a two-dimensional figure defined by its corner points. The areas covered by \gls{netcdf} files and the areas of the requests the back-end receives will be POLYGONs.
\item A LINESTRING is a sequence of points in two dimensions, defining a line. The span of time covered by each \gls{netcdf} file will be represented by a LINESTRING with two points.
\item A POINT is a single point in two-dimensional space. A request will be for a single point in time.
\end{itemize}
Spatialite supports set operators for all of these geometry types; we will be using intersects(). This function, used in SQL statements, takes two geometries of any kind as arguments. It returns "1" if any point exists that is part of both geometries.
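As an illustration, a file lookup in the indexer could be expressed as the following SQL sketch (the table and column names are hypothetical, not our actual schema):

```sql
-- Hypothetical table file_index(filename TEXT, extent POLYGON):
-- find every NetCDF file whose extent overlaps the requested region.
SELECT filename
FROM file_index
WHERE Intersects(
        extent,
        GeomFromText('POLYGON((8 63, 10 63, 10 64, 8 64, 8 63))')
      );
```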
Spatialite can build and maintain R-tree indexes for geometry columns, allowing it to resolve calls to functions like intersects(a,b) faster. It does this by first checking whether the minimum bounding rectangles of the geometries intersect, discarding any pair that does not before the more expensive exact check is performed.
Short for rectangle tree, an R-tree is a data structure for hierarchically organizing spatial data. The idea is to group the spatial information into rectangles, which are contained in larger rectangles, and so on. In this way only a small part of the tree needs to be traversed when it is searched. There are several variants with slightly different performance for construction and other operations.
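The bounding-rectangle pre-check that makes this fast can be sketched in a few lines of Java (a simplified illustration, not Spatialite's actual implementation): two axis-aligned rectangles intersect exactly when their ranges overlap on both axes.

```java
// Minimum bounding rectangle (MBR) with the overlap test an R-tree
// relies on. Coordinates here are longitude (x) and latitude (y).
public class Mbr {
    final double minX, minY, maxX, maxY;

    Mbr(double minX, double minY, double maxX, double maxY) {
        this.minX = minX; this.minY = minY;
        this.maxX = maxX; this.maxY = maxY;
    }

    // Two rectangles overlap iff their intervals overlap on both axes.
    boolean intersects(Mbr other) {
        return minX <= other.maxX && other.minX <= maxX
            && minY <= other.maxY && other.minY <= maxY;
    }

    public static void main(String[] args) {
        Mbr fileExtent = new Mbr(8.0, 63.0, 10.0, 64.0);
        System.out.println(fileExtent.intersects(new Mbr(9.5, 63.5, 11.0, 65.0))); // true
        System.out.println(fileExtent.intersects(new Mbr(20.0, 70.0, 21.0, 71.0))); // false
    }
}
```

Only the pairs that pass this cheap test are handed to the exact geometry comparison.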
\chapter{\Gls{pre-study}}
In this chapter, we present the findings of the \gls{pre-study}. The \gls{pre-study} document was made as a separate document that was meant to be delivered to the customer independently of this report. Therefore, there are overlapping sections with the full report. These sections will not be given in this chapter, but are instead presented in their respective chapters of the full report.
\section{Background}
To get an understanding of the customer's needs, as well as to study different solutions, a \gls{pre-study} was conducted. From the \gls{course compendium}:
\begin{quote}
The preliminary studies are vital for the group to obtain a good understanding of the total problem.
Here, you will have to describe the problem at hand. You should describe the current system and the
planned solutions (...).
\cite{TDT4290:Intro}
\end{quote}
\section{Current situation at SINTEF}
In this section we will explore the solution SINTEF is currently using, and the challenges and limitations it poses. After looking into this, we will describe the evaluation criteria used to assess the alternative solutions the group has found.
\subsection{Current system}
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics[height=223px,width=396px]{img/region_interface_sinmod.png}}
\caption{The main interface for a region in SinMod}
\label{fig:sinmod-region-main-interface}
\end{center}
\end{figure}
The current system deployed at SINTEF serves their clients by providing access to a collection of more than 100 000 pre-generated PDF files. These files contain information on currents, salinity, and temperature. The user may choose what information he wants by selecting parameters in the drop-down menus, see figure \ref{fig:sinmod-region-main-interface}.
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics[height=223px,width=396px]{img/site_key_data.png}}
\caption{The key data a user is presented with when selecting a specific site}
\label{fig:sinmod-site-key-data}
\end{center}
\end{figure}
If the user chooses a specific site from the map or the location drop-down, he will be presented with key data for this area. This includes statistical information such as maximum current speed, average current speed and so forth, as well as the geographical position, as given in figure \ref{fig:sinmod-site-key-data}.
\begin{figure}[h]
\begin{center}
\fbox{\includegraphics[height=300px,width=300px]{img/site_graphs.png}}
\caption{Some of the graphs presented to the user when selecting a specific site}
\label{fig:sinmod-site-graphs}
\end{center}
\end{figure}
The system will also present a set of pre-defined graphs, including current roses, tidal ellipses and vertical profiles. The graphs are given for standard attribute values (e.g. a depth of 2 meters), and for some of them there is an option of downloading a PDF containing graphs for other values of the given attributes. An example is shown in figure \ref{fig:sinmod-site-graphs}.
\subsection{Challenges}
As the PDFs are pre-generated, there is a clear limitation to what information the user may request. If a user wants to know, for example, the connection between salinity and current speed at a given location, the user must download two different PDFs and manually compare these.
The graphs given for a specific location are only given for limited values of the critical attributes. If we look at the current rose, it is presented for a depth of 2 meters. If the user is really interested in the current rose for 10 meters, he has to download the PDF containing all possible current roses.
The same is true for the maps that can be generated for a specific site. The user may choose period (a single month may be selected), and one of the five variables. This gives the user a PDF with one map for each depth that can be calculated.
For a user who knows what data is interesting, this is a complicated and data-heavy way of delivering information. The PDFs seem to range in size from 125 kB to around 3 MB, depending on what information is requested. The region maps are by far the largest in file size, ranging from 1 to 3 MB, while the files containing the current roses are quite small, in the 100-200 kB range.
On a computer with a broadband connection, the size of the files is not very problematic. For these users, the biggest challenge is that they cannot specify what kind of data they want plotted, and have to look through quite a lot of pages to get the information needed. For a user on a low-bandwidth connection and/or on a mobile device, the size of the files is a more pressing problem. On an \gls{EDGE} connection, the theoretical best download time for a 3 MB file is 62.5 seconds at 384 kbit/s \cite{3gpp.com}.
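This figure follows directly from the link rate, assuming decimal megabytes ($1$ MB $= 8000$ kbit):
\[
\frac{3 \times 8000~\mathrm{kbit}}{384~\mathrm{kbit/s}} = 62.5~\mathrm{s}
\]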
\subsection{Evaluation criteria}
SINTEF's main goal with this project is to be able to rid themselves of the PDF store, and generate the information on demand. This will make for a much more flexible system, where it will be possible to add new graphs and functionality quickly. For the customers, it will make it possible to request more customized graphs and plots.
SINTEF has presented the group with several goals they wish the new system to fulfill:
\begin{itemize}
\item The system should generate the graphs and maps directly from the \gls{netcdf} files on user request.
\item The user should be able to select several variables for one plot.
\item The system should be usable on low bandwidth connections.
\item The system should be usable on mobile devices.
\item The system should be easy to expand with new functionality.
\end{itemize}
With these goals in mind, the group started looking into different technologies that could be used to make such a system.
%%%
\section{Other production solutions}
The group spent the first part of the project investigating which solutions would best suit SINTEF's needs. In this section we present the different solutions we have found, along with an assessment of how each solution rates against the evaluation criteria.
%Existing tech - Italian solution
\subsection{Adriatic Forecasting System}
\emph{Link: \url{http://oceanlab.cmcc.it/afs/}} \\%make sure to use //
The solution, by the Operational Oceanography Group Italy and cmcc Ocean-Lab, can display temperature, salinity, currents, sea surface height, wind stress and heat flux. It allows the user to choose date, region and depth as search filters, and uses \gls{PNG} overlays on a Google map. The \gls{PNG}s are retrieved from cmcc's own server.
Although it graphically looks quite nice, it can be seen that some tiles do not overlap precisely. In addition, the page sometimes needs to be refreshed because the application does not load properly. The \gls{PNG} layers are displayed without a specific JavaScript library, and the JavaScript code is quite complex compared to other existing solutions that use libraries. Therefore this solution is not easily reusable.
\\ \emph{Overall rating: \textbf{Ok}}
\subsection{Danish Centre of Ocean and Ice}
\emph{Link: \url{http://ocean.dmi.dk/anim/index.uk.php }} \\%make sure to use //
The solution by the Danish Center of Ocean and Ice can display temperature, salinity and current, which are the most important factors of the SINTEF simulation. This said, it lacks the ability to choose depth and specify a date interval. The data is shown as static \gls{PNG}s, thus the map is not interactive. There is, on the other hand, a possibility to choose different geographical areas with the highest level of detail around Denmark. This is on the same level as SINTEF's existing solution.
\\ \emph{Overall rating: \textbf{Bad}}
\subsection{Fisheries and Oceans Canada}
\emph{Link: \url{http://www.tides.gc.ca/eng}} \\%make sure to use //
The solution of the Canadian government resembles the Danish one. It is possible to choose a geographical area on a static map. By choosing an area you get the opportunity to choose a smaller, more specific area. The big difference is that all data is presented as text in tables, thus making it less convenient and intuitive to use.
\\ \emph{Overall rating: \textbf{Bad}}
\subsection{Ocean viewer}
\emph{Link: \url{http://www.oceanviewer.org}} \\%make sure to use /
Ocean Viewer is a pilot project of the \gls{MEOPAR} of Canada. It gathers data from different sources and displays it as \gls{PNG}s overlaid on a map. You can select different geographical areas on a customized Google Map, and different data from a menu (temperature, salinity and others). Like the Danish solution, the \gls{PNG}s can be shown in sequence to illustrate changes over time.
\\ \emph{Overall rating: \textbf{Ok}}
\subsection{Sea temperatures and Currents - Bureau of Meteorology}
\emph{Link: \url{http://www.bom.gov.au/oceanography/forecasts/}} \\%make sure to use //
The Australian Bureau of Meteorology has a solution that is very similar to those of the other national agencies. You can choose a geographical area on a static map. Here as well, the data is visualized with images overlaid on a static map, with the possibility to loop through the images to show changes in the data over time.
\\ \emph{Overall rating: \textbf{Ok}}
\subsection{yr.no Map Service}
\emph{Link: \url{http://yr.no/kart}} \\%make sure to use //
This solution presents the user with a conventional map interface. On the sides and at the top there are menus for selecting which variable and time step should be displayed. The user is only allowed to select a time step within about 8 days of the current time. Interesting variables include sea temperature, salinity and sea currents, each displayed as its own layer. There is no way to select depth, and all data seems to be surface values. The chosen variable is added as an overlay of \gls{PNG} tiles using OpenLayers. The tiles are fetched using \gls{WMS} from a Norwegian meteorological service server. Several measurement stations are displayed on the map, and they give information when clicked. In addition, clicking anywhere on the map displays several plots predicting the next two days of weather for that spot.
\\ \emph{Overall rating: \textbf{Ok}}
\section{Back-end}
\subsection{GeoServer}
GeoServer does not support \gls{netcdf} natively. There exists a community plugin that enables reading from \gls{netcdf} files, but the support seems unreliable. GeoServer does not seem to support more than one file, and metadata appears difficult to extract. Due to these factors, the system was deemed not to meet the criteria.
\subsection{THREDDS}
\begin{quote}
The \gls{THREDDS} Data Server (TDS) is a web server that provides metadata and data access for scientific datasets, using \gls{OPENDAP}, \gls{OGC} \gls{WMS} and \gls{WCS}, \gls{HTTP}, and other remote data access \glspl{protocol}. \cite{TDS:Web}
\end{quote}
Besides supporting multiple data access \glspl{protocol}, \gls{TDS} is able to virtually aggregate multiple \gls{netcdf} files into one dataset that can be used for queries, such as selecting a region and sending its data in a specified format. Dataset configuration is done via \gls{NCML}, which is a dialect of \gls{XML}.
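As a sketch, an \gls{NCML} aggregation joining all \gls{netcdf} files in a directory along their shared time dimension could look like this (the directory path and dimension name are examples, not SINTEF's actual configuration):

```xml
<netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
  <aggregation dimName="time" type="joinExisting">
    <!-- scan a directory and treat every .nc file as part of one dataset -->
    <scan location="/data/sinmod/" suffix=".nc" />
  </aggregation>
</netcdf>
```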
Although \gls{TDS} seems to support everything that is needed for this project, its installation, and especially its configuration for aggregation and special needs, is not trivial. Moreover, there are doubts about its speed when serving ranges containing large numbers of points. One advantage is that SINTEF currently has a \gls{TDS} running and configured, so we will not need to configure it from scratch, and SINTEF employees likely have experience and knowledge about setting it up, something the group lacks.
\subsection{Custom solution}
An alternative is to write a custom \gls{back-end} from scratch.
Advantages of this are:
\begin{itemize}
\item Easy deployment - Written as a single service that can be launched easily.
\item Load balancing - Scales outwards easily: the servers can be put behind a load balancer.
\item Speed - No runtime overhead from unused features.
\item Code clarity - No code for unused features, keeping the codebase small.
\end{itemize}
A skeleton for the whole server is in place. Below is a list of features that the group feels are within reach in the project period.
\begin{itemize}
\item Indexing and file selection.
\item Projection (Mapping between latitude/longitude and file indices)
\item Filtering (Reducing the result set before reading the file)
\item Output
\item Rendering to image
\item Rendering to GeoJSON
\end{itemize}
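As a sketch of the projection step listed above, the function below maps a latitude/longitude pair to the nearest file indices, assuming a regular lat/lon grid. The grid origin and spacing are invented for illustration; real forecast files may use a curvilinear grid that requires a proper projection library.

```python
def latlon_to_index(lat, lon, lat0, lon0, dlat, dlon):
    """Map a latitude/longitude to the nearest (row, col) file indices
    on a regular grid with origin (lat0, lon0) and spacing (dlat, dlon)."""
    row = round((lat - lat0) / dlat)
    col = round((lon - lon0) / dlon)
    return row, col

# Hypothetical grid starting at 60N, 0E with 0.1 degree spacing:
print(latlon_to_index(63.43, 10.40, 60.0, 0.0, 0.1, 0.1))  # -> (34, 104)
```

The inverse mapping (index to coordinate) would be needed when rendering, and a filtering step could then operate directly on index ranges before any file is read.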
Performance seems very good at the current level of implementation. The only major potential bottleneck the group sees is reading files from a hard drive (all testing has been done on an \gls{SSD}). This bottleneck is, however, independent of the \gls{back-end} solution.
\subsection{MapTiler}
\emph{Link: \url{http://www.maptiler.org/}} \\%make sure to use //
MapTiler is an application for online map publishing. It creates tiles that can be overlaid on other maps such as Google Maps, OpenStreetMap and others. It is written in C/C++ and claims to be much faster than other existing solutions. A drawback seems to be that it is made for overlaying a pre-generated directory of images rather than dynamic data like the ocean forecast data.
\\ \emph{Overall rating: \textbf{Ok}}
\subsection{ncWMS}
\emph{Link: \url{http://www.resc.rdg.ac.uk/trac/ncWMS/}} \\%make sure to use //
ncWMS is an \gls{open source}, free to use, Java server application. It was created to support interactive browsing of gridded four-dimensional \gls{netcdf} data over the web. Clients send requests containing the desired coordinates (latitude, longitude, depth and time), the variable to be displayed, and the projection, format and size of the response. ncWMS, which has been configured to read from datasets (for example sets of \gls{netcdf} files or a \gls{THREDDS} server), responds to a request with an image of the desired type. ncWMS adheres to the \gls{WMS} specification (\gls{WMS} 1.3.0 and 1.1.1 are supported).
\paragraph{Configuration}
ncWMS is mainly configured through a web interface, where datasets can be added and server settings changed. A dataset can be a single \gls{netcdf} file, an \gls{OPENDAP} endpoint (a service provided by \gls{THREDDS}), a \gls{NCML} file or a glob aggregation (using wildcard characters like \texttt{*}). It is possible to configure which variables of a dataset to expose and how. Each dataset can also be set to refresh automatically at certain intervals. The server can be set to cache data, reducing \gls{CPU} load at the expense of memory and disk space.
\paragraph{Aggregation}
Glob aggregation and \gls{NCML} work well with files that cover the same area but contain different time steps. To handle different areas and different resolutions, it may be possible to use \gls{THREDDS}/\gls{OPENDAP} or \gls{NCML}.
\paragraph{Performance}
Testing both local and publicly available ncWMS servers, performance seems good. When using the Godiva2 browser-based client or sending individual requests, the server mostly responds quickly, with some unexplained exceptions. Transmitting the map data to the client as \gls{PNG} images should be bandwidth-efficient, and it moves the computational load away from the client. When starting the server application or adding a dataset, the server needs some time to load metadata from the datasets. This only takes a few seconds, even for a few hundred GB of local \gls{netcdf} files.
\paragraph{Summary}
ncWMS does a lot of what we need the \gls{back-end} of our solution to do. It handles the extraction, downsampling and projection of the data, creating an image ready to be used in a map widget or on its own. It can only handle requests for a few kinds of plots. It is \gls{open source}, and is free to use under a modified \gls{BSD} license. To serve all the plots and data required it would need modification.
\\ \emph{Overall rating: \textbf{Good}}
%%%
\section{Transmission Protocols}
There are several standardized \glspl{protocol} for serving map data from the server to the client in order to dynamically display layers on a map.
In particular, two main kinds of representation can be distinguished, and sometimes both are supported by a single standard:
\begin{description}
\item[Vector based layers] Data is sent from the server to the client in a textual format, such as GeoJSON
\item[Image based layers] Data is sent from the server to the client as images, such as \gls{JPEG}, \gls{PNG} and \gls{GIF}
\end{description}
The \acrfull{OGC} became involved in developing standards for web mapping after a paper outlining a ``WWW Mapping Framework'' was published by Allan Doyle in 1997. The oldest and most popular standard for web mapping is \gls{WMS}. However, the properties of this standard proved difficult to implement in situations where short response times were important. It is not uncommon for a \gls{WMS} service to require 1 or more \gls{CPU} seconds to produce a response. For massively parallel use cases, such a \gls{CPU}-intensive service is not practical. To overcome the cost of on-the-fly rendering, application developers started using pre-rendered map tiles, and several open and proprietary schemes were invented to organize and address these tiles.
In order to reduce the performance problems of \gls{WMS}, new standards have been defined:
\begin{itemize}
\item TMS
\item WMS-c
\item WMTS
\end{itemize}
\subsection{WMS}
\acrfull{WMS} is a standard \gls{protocol} for serving geo-referenced map images over the Internet that are generated by a map server using data from a GIS database. The specification was developed and first published by the Open Geospatial Consortium in 1999.
A \gls{WMS} server usually serves the map in a bitmap format, e.g. \gls{PNG}, \gls{GIF} or \gls{JPEG}. In addition, vector graphics such as points, lines, curves and text can be included, expressed in SVG or WebCGM format.
The \gls{WMS} standard gives the client flexibility, enabling it to obtain exactly the final image it wants. A \gls{WMS} client can request that the server create a map by overlaying an arbitrary number of the map layers offered by the server, over an arbitrary geographic bound, with an arbitrary background color, at an arbitrary scale, in any supported coordinate reference system. \cite{WMS:Web}
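To make the request flexibility concrete, the sketch below builds a \gls{WMS} 1.3.0 GetMap \gls{URL} with the standard parameters. The server address and layer name are placeholders; note that for EPSG:4326 in \gls{WMS} 1.3.0 the BBOX axis order is latitude before longitude.

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, width, height):
    """Build a WMS 1.3.0 GetMap request URL.

    bbox is (min_lat, min_lon, max_lat, max_lon) for CRS EPSG:4326,
    per the WMS 1.3.0 axis-order rules.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# Placeholder endpoint and layer name, chosen for illustration:
print(getmap_url("http://example.com/wms", "sea_temperature",
                 (58.0, 4.0, 66.0, 12.0), 512, 512))
```

Because every parameter is free, two clients viewing almost the same area produce different URLs, which is exactly what makes plain \gls{WMS} responses hard to cache.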
\subsection{TMS}
\gls{TMS} is a specification for storing and retrieving cartographic data, developed by the Open Source Geospatial Foundation. The \gls{TMS} \gls{protocol} fills a gap between the very simple standard used by OpenStreetMap and the complexity of the \gls{WMS} standard, providing simple \glspl{URL} to tiles while also supporting alternative spatial reference systems.
\gls{TMS} is widely supported by web mapping clients and servers, and it served as the basis for \gls{WMTS} (the OpenGIS Web Map Tile Service \gls{OGC} standard). \cite{TMS:Web}
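A sketch of the tile addressing involved: the widely used spherical-Mercator scheme maps a coordinate to integer tile indices at a given zoom level. The example uses the OpenStreetMap (XYZ) convention; \gls{TMS} differs only in counting tile rows from the south instead of the north.

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Spherical-Mercator tile indices in the OpenStreetMap (XYZ) scheme."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

def xyz_to_tms_row(y, zoom):
    """TMS counts rows from the south; XYZ counts from the north."""
    return (2 ** zoom) - 1 - y

x, y = latlon_to_tile(63.43, 10.40, 10)  # Trondheim at zoom level 10
print(x, y, xyz_to_tms_row(y, 10))
```

Because the indices are small integers, a tile \gls{URL} is simply \texttt{.../zoom/x/y.png}, which is trivial to cache or serve as a static file.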
\subsection{WMS-c}
The WMS Tiling Client Recommendation, or WMS-C for short, is a recommendation set forth by OSGeo for making tiled requests using \gls{WMS}. It is just a recommendation on using \gls{WMS} properly in order to improve performance by caching data.
This recommendation relies on two basic concepts. First, the ability to cache map imagery can be improved by using image tiles of fixed width and height, referenced to some fixed geographic grid at fixed scales. A valid tile request is one that conforms to the specification of fixed image parameters and geographic grid(s) for a given layer; an invalid tile request is one that does not.
Second, caching of \gls{HTTP} GET requests is further made possible by constraining the \gls{URL} parameters used in the request. This recommendation identifies the \gls{WMS} GetMap parameters minimally needed for a client to request a valid tile. \cite{WMSc:Web}
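The grid-snapping idea can be sketched as follows. We assume a simple global EPSG:4326 grid where zoom level 0 is two 256-pixel tiles side by side and each level doubles the resolution; real WMS-C profiles publish their own grids, so the exact ladder here is an assumption.

```python
def tile_bbox(i, j, zoom):
    """Bounding box of tile (i, j) on a fixed global grid in EPSG:4326.
    Tiles are counted from the lower-left corner (-180, -90)."""
    size = 180.0 / (2 ** zoom)  # tile edge length in degrees
    minx = -180.0 + i * size
    miny = -90.0 + j * size
    return (minx, miny, minx + size, miny + size)

def is_valid_tile_bbox(bbox, zoom):
    """A WMS-C cache only answers requests that fall exactly on the grid."""
    size = 180.0 / (2 ** zoom)
    minx, miny, maxx, maxy = bbox
    return ((minx + 180.0) % size == 0 and (miny + 90.0) % size == 0
            and maxx - minx == size and maxy - miny == size)

print(tile_bbox(0, 0, 0))  # -> (-180.0, -90.0, 0.0, 90.0)
```

A conforming client thus issues ordinary GetMap requests, but only with bounding boxes produced by \texttt{tile\_bbox}, so every request hits a finite, cacheable set of images.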
\subsection{WMTS}
\gls{WMTS} is a standard \gls{protocol} for serving pre-rendered geo-referenced map tiles over the Internet. The specification was developed and first published by the Open Geospatial Consortium in 2010.
\gls{WMTS} builds on efforts to develop scalable, high-performance services for web-based distribution of cartographic maps. Similar initiatives, such as Google Maps and NASA OnEarth, were considered when defining the standard. \gls{WMTS} includes both resource-oriented (RESTful) and procedure-oriented (KVP and SOAP encoding) architectural styles, in an effort to harmonize this interface standard with the OSGeo specification.
\gls{WMTS} complements earlier efforts to develop services for the web based distribution of cartographic maps. The \gls{OGC} \gls{WMTS} provides a complementary approach to the \gls{OGC} \gls{WMS} for tiling maps. \gls{WMS} focuses on rendering custom maps and is an ideal solution for dynamic data or custom styled maps. \gls{WMTS} trades the flexibility of custom map rendering for the scalability possible by serving of static data (base maps) where the bounding box and scales have been constrained to discrete tiles. The fixed set of tiles allows for the implementation of a \gls{WMTS} service using a web server that simply returns existing files. The fixed set of tiles also enables the use of standard network mechanisms for scalability such as distributed cache systems. \cite{WMTS:Web}
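The contrast with GetMap can be illustrated with a \gls{WMTS} 1.0.0 KVP GetTile request: instead of an arbitrary bounding box, the tile is addressed by discrete matrix/row/column indices, so the response can be a pre-existing static file. The endpoint, layer and tile matrix set names below are placeholders.

```python
from urllib.parse import urlencode

def gettile_url(base, layer, matrix_set, zoom, row, col):
    """Build a WMTS 1.0.0 KVP GetTile request URL."""
    params = {
        "SERVICE": "WMTS",
        "REQUEST": "GetTile",
        "VERSION": "1.0.0",
        "LAYER": layer,
        "STYLE": "default",
        "TILEMATRIXSET": matrix_set,
        "TILEMATRIX": zoom,  # which zoom level of the tile pyramid
        "TILEROW": row,
        "TILECOL": col,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# Placeholder endpoint and names, chosen for illustration:
print(gettile_url("http://example.com/wmts", "sea_temperature",
                  "EPSG:4326", 3, 2, 5))
```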
\subsection{Summary}
WMS-C is the best supported and most mature \gls{protocol}, but it is a bit of a kludge layered on top of \gls{WMS} to support tiles, and it incurs some extra overhead from having to use world-coordinate bounding boxes rather than tile coordinates.
\gls{TMS} is fairly mature, and is specifically designed for tiles, but is not an official \gls{OGC} spec.
\gls{WMTS} is an \gls{OGC} specification meant to replace \gls{TMS} and WMS-C. It works purely in tile coordinates like \gls{TMS} (although it computes them differently), but has some additional capabilities that were not in \gls{TMS}, such as GetFeatureInfo. It is comparatively recent, but is seeing increasing use, even if its implementations are less mature. It is also supported by OpenLayers.
%%%
\section{Front-end solutions}
A central part of the product requirement is to display simulated data in a dynamic manner. There exist solutions to do just that, as well as JavaScript libraries that make it relatively easy to create a \gls{front-end} solution of our own. In this section we discuss and compare the most relevant of these in the context of our assignment and the product requirements.
\subsection{Custom made solution}
A custom-made solution has the general advantage that we can build it to specification and thus make sure it meets the requirements without having to deal with another code base. Trying to customize an existing solution might be as much work as building something from the ground up. In the following paragraphs we review and rate technologies we found relevant for building a custom \gls{front-end}.
\subsubsection{LeafletJS}
\begin{tabular}{|p{4cm}|p{8cm}|}
\hline
Home page: & \url{http://www.leafletjs.com} \\
\hline
Service functionality: & Creating mobile-friendly interactive maps. \\
\hline
\end{tabular}
\paragraph{Introduction} \indent
LeafletJS (``Leaflet'' for the rest of this section) is an \gls{open source} JavaScript library for creating mobile-friendly interactive maps. It is licensed under the \href{https://github.com/Leaflet/Leaflet/blob/master/LICENSE}{2-clause \gls{BSD} License}, which makes it free to use in commercial applications as long as credit is given somewhere in the user interface.
Even though Leaflet is free to use, it depends on a third party to provide the map tiles. These may not be free to use.
\paragraph{Features}
Leaflet has the features one would expect from a modern interactive map, including panning with inertia, zooming and the ability to add markers. It also supports double-tap and pinch-to-zoom on iOS and Android mobile phones. Furthermore, all five major web browsers are supported, with graceful fallback for old versions.
The most powerful feature of Leaflet is the ability to add layers. The different supported layers are:
\begin{itemize}
\item Tile layers
\item Marker layers
\item Pop-ups
\item Vector layers
\item GeoJSON layers
\item Image overlays
\item \gls{WMS} layers
\item Layer groups
\end{itemize}
For our purposes, the ability to get map tiles from different sources is very interesting. It lets us show, for example, both nautical maps and regular land maps at the convenience of the user. At zoom levels covering large geographical areas, it will probably be best to show the relevant simulated data as overlaid \gls{PNG}s, which is easily achieved with image overlays in Leaflet. To show very detailed data when zoomed further in, we might use vector layers to visualize the data. It is also possible to use a GeoJSON layer to convert data formatted as GeoJSON to vectors.
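To illustrate what a GeoJSON layer would consume, the sketch below builds a minimal GeoJSON FeatureCollection on the server side. The variable name and values are invented for illustration; note that GeoJSON orders coordinates as longitude, latitude.

```python
import json

# One hypothetical data point near Trondheim (coordinates are lon, lat):
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [10.40, 63.43]},
    "properties": {"sea_temperature": 8.2},  # made-up example value
}
collection = {"type": "FeatureCollection", "features": [feature]}

print(json.dumps(collection))
```

A front-end map library can take such a document directly and render each feature as a vector object, styling it from the \texttt{properties} attributes.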
It is also possible to use \gls{WMS} to overlay, for example, meteorological data on a map. Even though this format is used by large organizations like the National Oceanic and Atmospheric Administration (\url{noaa.gov}), we have been advised against using it due to its negative effect on the speed and responsiveness experienced by the end user.\footnote{Iván Sánchez Ortega, How to Build Slow Maps - Trondheim, September 24, 2014}
Furthermore, Leaflet can be extended with plugins. These can be written relatively easily in JavaScript, or an existing plugin can be downloaded and used. A plugin relevant to our needs could for example be heatmap.js (\url{http://www.patrick-wied.at/static/heatmapjs/}).
\paragraph{Summary}
Leaflet is very suitable for our needs with respect to creating an interactive map overlaid with visualizations of relevant data created by the SINTEF ocean forecast simulations. It is lightweight (33 kilobytes), made to be compatible with mobile phones and very flexible in its possibilities for displaying data on maps. It is also well documented.
\subsubsection{OpenLayers}
\begin{tabular}{|p{4cm}|p{8cm}|}
\hline
Home page: & \url{http://openlayers.org/} \\
\hline
Service functionality: & A high-performance, feature-packed library for all your mapping needs. \\
\hline
\end{tabular}
\paragraph{Introduction} \indent
OpenLayers is an \gls{open source} JavaScript library (provided under the \href{https://tldrlegal.com/license/bsd-2-clause-license-(freebsd)}{2-clause \gls{BSD} License}) for displaying map data in web browsers. It provides an \gls{API} for building rich web-based geographic applications similar to Google Maps and Bing Maps. The library was originally based on the Prototype JavaScript Framework. Since November 2007, OpenLayers has been an Open Source Geospatial Foundation project.
The current stable version, OpenLayers 3.0, was released on August 29, 2014.
\paragraph{Features}
OpenLayers provides support to the following functionalities:
\begin{description}
\item[Tile layers] Pulls tiles from OSM, Bing, MapBox, Stamen, MapQuest, and any other XYZ source you can find. \gls{OGC} mapping services and untiled layers are also supported.
\item[Vector layers] Renders vector data from GeoJSON, TopoJSON, KML, GML, and a growing number of other formats.
\item[Fast \& Mobile Ready] Mobile support is out of the box, and it is possible to build lightweight custom profiles with just the needed components.
\item[Cutting Edge \& Easy to Customize] Map rendering leverages WebGL, Canvas 2D, and \gls{HTML5}. Map styling is controlled with straight-forward \gls{CSS}.
\end{description}
\paragraph{Initial load time}
The initial load time of the map is acceptable on a PC. The total loading time of the example (\gls{HTML} with map and \gls{WMS} tiles) was around 4 seconds. The JavaScript library (ol.js) is 129 KB, and the first time (without any browser caching) it was fetched in 473 ms.
\\ \emph{Score: \textbf{Med}}
\paragraph{Responsiveness}
Panning is quite fast, but zooming in is slower (438 ms plus 2.43 s for the DNS lookup of the \gls{WMS} server in the example). When zooming out, fetching tiles from the server took around 400 ms on average (no DNS lookup is needed), but the overall perceived time is around 1 s.
\\ \emph{Score: \textbf{Med}}
\paragraph{Detail and dynamism}
The detail of the image is defined by the \gls{PNG}s provided by the \gls{WMS} server. Vector layers are also well drawn. Tile layers are updated on zooming.
\\ \emph{Score: \textbf{High}}
\paragraph{Ease of use}
It is quite easy to implement: little code is needed, and it is easily converted into the Leaflet format. There are many working examples provided, but version 3.0 (released very recently, at the end of August 2014) is not yet well documented. However, detailed \gls{API} documentation is provided.
\\ \emph{Score: \textbf{Med}}
\paragraph{Summary}
OpenLayers is a dynamic, feature-rich library used in many applications. On the other hand, it is less lightweight than LeafletJS, not as well documented, and not always as fast as experienced by the user, especially when zooming.
\\ \emph{Overall rating: \textbf{Good}}
\subsubsection{Wind map}
\emph{Link: \url{http://hint.fm/wind/}} \\%make sure to use //
Wind map is a personal art project that takes surface wind data from the National Digital Forecast Database (an American government source) and displays it as moving curved lines on a map. This makes for a very intuitive visualization that gives a good general picture of the actual real-world situation. The map can be zoomed and panned. Clicking a specific point on the map gives the wind speed and coordinates of that point. A drawback might be that use on mobile devices may be suboptimal.
\\ \emph{Overall rating: \textbf{Good}}
\subsubsection{Comparison of Leaflet and OpenLayers}
For the purpose of creating an interactive map, we have boiled the choice down to either LeafletJS or OpenLayers. As the descriptions of the individual libraries above show, the two have many of the same capabilities: both give us the possibility to overlay visualizations of data in different ways. One reason for choosing Leaflet is the extensive documentation of its current version, a point where OpenLayers is lacking at the moment. Despite this, OpenLayers' support for \gls{WMTS} and \gls{WMSC} is such a strong argument in its favor that we judge it the best fit for our purposes.
\subsection{Godiva2}
\begin{tabular}{|p{4cm}|p{8cm}|}
\hline
Example page: & \url{http://behemoth.nerc-essc.ac.uk/ncWMS/godiva2.html} \\
\hline
Service functionality: & A browser client made to browse data served by a ncWMS server. \\
\hline
\end{tabular}
\paragraph{Introduction}
Godiva2 was created as a companion for ncWMS, to display the \gls{PNG}s served by ncWMS as tiles in a map interface. It is a fairly simple \gls{HTML} page using JavaScript. \\
It presents the user with a pannable, zoomable map interface of the earth. Variables (for example ocean temperature) can be selected from a menu; an overlay is then added to the map, which zooms and pans to show the relevant area. The user can select the date, time of day and depth, updating the map immediately. Clicking on the map brings up a context menu from which a vertical profile plot can be created for that spot. By selecting a tool in the map interface, the user can draw a line on the map, and a plot is created showing the value of the selected variable along that line. A menu allows selecting a different \gls{WMS}, changing the background map and the projection of any overlays. The site allows the user to grab a permalink to the current state of the map.
\paragraph{Load time}
Before a variable is selected, only the list of available variables and the background map are fetched. Testing Godiva2 on a local ncWMS server, the initial load time is low. Tests using publicly available datasets vary, but some have low load times, indicating that the slower ones might simply have fewer available resources.
\\ \emph{Score: \textbf{Good}}
\paragraph{Responsiveness}
The responsiveness of panning and zooming the map, as well as changing the desired depth and timestep varies quite a bit. This is true both for local and remote datasets.
\\ \emph{Score: \textbf{Medium}}
\paragraph{Detail and dynamism}
Every bit of detail available in the dataset is shown, as required by the level of zoom.
\\ \emph{Score: \textbf{High}}
\paragraph{Ease of Use}
The menus for changing variable, depth and time step are all intuitive and easy to use. The map interface follows the conventions for zooming and panning with the mouse.
\\ \emph{Score: \textbf{High}}
\paragraph{Summary}
This is a product made to browse four-dimensional geospatial data like the data we want to present. The look and feel is slightly outdated, but the functionality it has works well. It is \gls{open source}, and both it and its third-party libraries are licensed under free software licenses. It (and the ncWMS server) would need to be modified in order to display all the data and charts found in the current solution.
\\ \emph{Overall rating: \textbf{Good}}
\section{Recommendation}
Based on the study of possible solutions in the previous chapter, the group has made the following recommendation to the customer.
\subsection{Front-end}
As the group has looked into the situation at SINTEF, as well as other solutions in production, the conclusion is rather clear. SINTEF requires a very specific \gls{front-end}, with custom graphs and the ability to add more functionality at a later stage. As far as existing solutions go, only the Godiva2 system comes close to performing the necessary tasks to serve SINTEF's needs. This system is, however, not very easy to adapt, and the group is of the opinion that changing it to fit custom needs may be too much work to justify.
With this in mind, the group feels that the only realistic solution for the \gls{front-end} is to develop a custom system from scratch. Specifically we think using the OpenLayers JavaScript library will be beneficial in this project.
\subsection{Back-end}
For the \gls{back-end}, the group is divided in its recommendation. From our investigation into different technologies, \gls{THREDDS} appears to be the solution generally used for working with \gls{netcdf} files. Advantages of using it are that documentation, maintenance and updates exist. The group has had trouble configuring \gls{THREDDS}, and has also gotten the impression that SINTEF is not perfectly satisfied with its performance.
The group feels confident that a custom \gls{back-end} \gls{prototype} can be made, and that it will be able to meet the requirements. This will give the customer more flexibility in functionality and expandability, but will obviously not be as complete or well supported as \gls{THREDDS}.
The group asks SINTEF to make a decision for a \gls{back-end} solution based on the information in this report.
\subsection{Different paths}
\label{subsec:DifferentPaths}
As SINTEF is given a choice for how to proceed, the group sees two paths forward in the project:
\begin{enumerate}
\item \gls{THREDDS} \gls{back-end}: Given that SINTEF chooses to use a \gls{THREDDS} \gls{back-end}, the group asks SINTEF to provide a configured \gls{THREDDS} server. The group will focus their work on the \gls{front-end}, and attempt to make a \gls{prototype} that replicates most functionality from today's site.
\item Custom \gls{back-end}: Given that SINTEF chooses a custom \gls{back-end}, the team will split into two groups, and attempt to make a \gls{prototype} \gls{back-end}, as well as a \gls{prototype} \gls{front-end}. In this scenario, both the front- and \gls{back-end} will have limited capabilities, but will demonstrate what is possible with a fully custom solution.
\end{enumerate}
\chapter{Requirements}
\label{chap:Requirements}
This chapter presents the functional and non-functional requirements for the project, as well as system backlog priorities and estimates.
\section{Use Cases}
The desired behavior of the system can be described through use case diagrams, which display the users and how they want to interact with the system. Use cases model the system requirements and are an easy way to understand which features are needed and how these features should work. This section outlines the general use cases. For more detailed specifications, we refer to the requirements specification in section \ref{reqspec}.
\subsection{Planning}
In our project, we have a relatively simple set of users and parameters. The main focus of our task is to deliver a full system with working \gls{front-end} and \gls{back-end} modules. The user should be able to use the \gls{front-end} without further knowledge of the \gls{back-end}. The user will interact with a \gls{GUI} through keyboard and mouse input. This will mainly be map interaction like scrolling or dragging the map, but parameters will also be controlled through buttons and other input fields.
There is also a second type of user that is not as important in our scope, but has to be considered: third-party users or companies that access the data directly from our \gls{back-end} for use with their own \gls{front-end} solution. These users will send REST requests using the commands allowed by our \gls{API}.
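As a sketch of what such a third-party request could look like, the example below constructs a query \gls{URL}. The endpoint path and every parameter name are hypothetical, invented purely for illustration; the actual \gls{API} is defined later in the requirements.

```python
from urllib.parse import urlencode

# Hypothetical REST query for a data subset; all names are placeholders.
query = urlencode({
    "variable": "salinity",
    "depth": 0,
    "time": "2014-10-01T12:00:00Z",
    "bbox": "58.0,4.0,66.0,12.0",
})
url = "http://example.com/api/v1/data?" + query
print(url)
```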
\subsection{Users}
Our product will have two types of users: