HAT SHE functions are built using Scala and deployed on the AWS Lambda platform. The detailed developer's guide to the Smart HAT Engine (aka SHE) can be found here -> https://developers.hubofallthings.com/guides/smart-hat-engine/.
There are 2 projects in this repository - sentiment-tracker and data-feed-counter. These are the default
SHE functions available to the HAT microserver. This README file serves as a Quickstart guide to developing
a SHE function in Scala by explaining the code layout of data-feed-counter. The command scripts used to
deploy the functions are also described.
To build and deploy the functions you will need:
- sbt
- serverless [https://serverless.com]
- AWS credentials to deploy lambda functions. Remember to configure serverless with your AWS credentials.
A SHE function must consist of 3 things:
- an endpoint / method to return its configuration [https://developers.hubofallthings.com/guides/smart-hat-engine/01-function-information-format.html]
- an endpoint / method to define the `data-bundle` specifying what data it wants to receive, parametrised by the date range (`fromDate` and `untilDate` query parameters in ISO 8601 format).
- an endpoint / method that accepts 1 and 2 above and does the actual computation.
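These three pieces can be sketched as follows. This is a simplified illustration only: the type names (`FunctionConfiguration`, `DataBundle`, `SheFunction`, `CounterSketch`) are hypothetical stand-ins, not the actual SHE SDK types.

```scala
import java.time.ZonedDateTime

// Hypothetical, simplified types standing in for the real SHE interfaces
case class FunctionConfiguration(name: String, version: String)
case class DataBundle(
    name: String,
    fromDate: Option[ZonedDateTime],
    untilDate: Option[ZonedDateTime]
)

trait SheFunction {
  // 1. returns the function's configuration
  def configuration: FunctionConfiguration
  // 2. declares the data the function wants, parametrised by an ISO 8601 date range
  def bundleFilterByDate(fromDate: Option[String], untilDate: Option[String]): DataBundle
  // 3. the actual computation over the requested data
  def execute(configuration: FunctionConfiguration, bundle: DataBundle): Seq[String]
}

object CounterSketch extends SheFunction {
  val configuration = FunctionConfiguration("data-feed-counter", "1.0.0")

  def bundleFilterByDate(fromDate: Option[String], untilDate: Option[String]): DataBundle =
    DataBundle(
      "data-feed-counter",
      fromDate.map(ZonedDateTime.parse),
      untilDate.map(ZonedDateTime.parse)
    )

  def execute(configuration: FunctionConfiguration, bundle: DataBundle): Seq[String] =
    Seq(s"${configuration.name} v${configuration.version} executed")
}
```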
There are 2 ways of invoking a Lambda function on AWS.
- via HTTP endpoints
- via the protocols provided by the AWS SDKs
The format of the inbound request is DIFFERENT depending on the invocation method chosen. HAT executes
lambda functions via the SDKs. However, it is easier to test via the HTTP endpoints with a client like Postman. Hence
you will see later that both of the methods above are coded for in the data-feed-counter (and sentiment-tracker)
SHE functions.
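To illustrate the difference between the two inbound shapes, here is a hedged sketch with hypothetical, simplified types (the real handlers work with the AWS Lambda event classes): a direct SDK invocation delivers the payload itself, while an API Gateway proxy invocation wraps the parameters in an event.

```scala
// Illustrative only: simplified stand-ins for the two inbound request shapes.
// DirectPayload mimics a payload delivered by a direct SDK invocation;
// ProxyEvent mimics an API Gateway proxy event, where request parameters
// arrive in queryStringParameters.
case class DirectPayload(fromDate: Option[String], untilDate: Option[String])
case class ProxyEvent(queryStringParameters: Map[String, String])

object RequestShapes {
  // direct invocation: the parameters are already fields of the payload
  def datesFromDirect(p: DirectPayload): (Option[String], Option[String]) =
    (p.fromDate, p.untilDate)

  // HTTP invocation: the same parameters must be dug out of the proxy event
  def datesFromProxy(e: ProxyEvent): (Option[String], Option[String]) =
    (e.queryStringParameters.get("fromDate"), e.queryStringParameters.get("untilDate"))
}
```

Both helpers yield the same logical `(fromDate, untilDate)` pair, which is why the processing code can be shared while only the parameter handling differs.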
We have broken down the project into 3 files, each with a specific purpose.
Looking into the folder function-data-feed-counter/src/main/scala/org/hatdex/hat/she/functions/
- DataFeedCounter.scala
- performs the actual processing
- DataFeedCounterHandler.scala
- handles invocation by AWS SDKs
- DataFeedCounterProxyHandler.scala
- handles invocation via HTTP endpoints
The only difference between DataFeedCounterHandler.scala and DataFeedCounterProxyHandler.scala is the way input parameters are processed.
Within DataFeedCounterHandler.scala, you will see 3 classes defined:
- the DataFeedCounterConfigurationHandler class (DataFeedCounterHandler.scala:LINE 22), which returns the `configuration` object in DataFeedCounter.scala (DataFeedCounter.scala:LINE 21)
- the DataFeedCounterBundleHandler class (DataFeedCounterHandler.scala:LINE 31), which calls the helper method `bundleFilterByDate` in DataFeedCounter.scala (DataFeedCounter.scala:LINE 64) to get the data for processing
- the DataFeedCounterHandler class (DataFeedCounterHandler.scala:LINE 13), which calls the `execute` method in DataFeedCounter.scala (DataFeedCounter.scala:LINE 127) to compute and respond
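The pattern is plain delegation: each handler class stays thin and forwards to the shared logic. The sketch below illustrates the idea only; the class bodies are simplified stand-ins, not the repository code.

```scala
// Illustrative delegation pattern: the handler forwards to the shared
// counting logic so that the computation is written once and reused by
// both the SDK and the HTTP entry points.
class DataFeedCounter {
  // counts occurrences per record, standing in for the real execute logic
  def execute(records: Seq[String]): Map[String, Int] =
    records.groupBy(identity).map { case (k, v) => k -> v.size }
}

class DataFeedCounterHandler(counter: DataFeedCounter) {
  // SDK-style entry point: the runtime has already deserialised the input,
  // so the handler simply delegates
  def handle(records: Seq[String]): Map[String, Int] = counter.execute(records)
}
```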
The HTTP counterparts in DataFeedCounterProxyHandler.scala are:
- DataFeedCounterProxyHandler.scala:LINE 22
- DataFeedCounterProxyHandler.scala:LINE 31
- DataFeedCounterProxyHandler.scala:LINE 13
We need to build a fat jar containing all the required classes and libraries for deployment. Use the command `sbt assembly`.
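For reference, a typical sbt-assembly configuration looks like the fragment below. This is a hypothetical build.sbt sketch (the jar name and merge strategy are assumptions); the repository's own build definition is authoritative.

```scala
// Hypothetical build.sbt fragment for sbt-assembly (the plugin itself is
// added in project/plugins.sbt). Jar name and merge strategy are examples.
assembly / assemblyJarName := "data-feed-counter.jar"
assembly / assemblyMergeStrategy := {
  // duplicate metadata files from different libraries would otherwise clash
  case PathList("META-INF", _*) => MergeStrategy.discard
  case _                        => MergeStrategy.first
}
```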
Refer to build-artifacts.sh in the root of the repository.
Refer to the serverless.yaml file.
- You need to provide the location of the jar file you have built above (see line 26).
- Then you need to define the lambda endpoints.
  - Again using `data-feed-counter` as an example, look at the `functions` section.
  - Note that `api-data-feed-counter`, `api-data-feed-counter-configuration` and `api-data-feed-counter-bundle` all point to their respective classes defined in DataFeedCounterProxyHandler.scala. These define the HTTPS lambda endpoints.
  - Note that `data-feed-counter`, `data-feed-counter-configuration` and `data-feed-counter-bundle` all point to their respective classes defined in DataFeedCounterHandler.scala. These are invoked directly by the AWS SDKs.
  - In the `api-*` sections, the endpoint paths are defined.
    - The path is of the format `<function-name>/<version>`. The endpoint is terminated with `/configuration` and `/data-bundle` for the Configuration and DataBundle functions respectively. The `function-name` and `version` are defined in the `configuration` object of DataFeedCounter.scala. See lines 22 and 24.
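The path convention above can be expressed as a small helper. This is illustrative only (serverless.yaml, not code, defines the real paths, and the version string is an example):

```scala
// Builds the three endpoint paths following the <function-name>/<version>
// convention: the base path, plus /configuration and /data-bundle suffixes.
def endpointPaths(functionName: String, version: String): Seq[String] = {
  val base = s"$functionName/$version"
  Seq(base, s"$base/configuration", s"$base/data-bundle")
}
```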
Refer to the build-artifacts.sh file. This script builds the jar files and stores them in the artifacts folder.
Internally, users authenticate with AWS by assuming designated roles in the target accounts. For this setup
to work with the Serverless framework, all the configuration information has to be contained in the ~/.aws/credentials file.
Serverless is not able to pick up profile details from the ~/.aws/config file (documentation of the issue
can be found here). As a result, the credentials file has to include the following
information:
```
[dataswift-environment]
source_profile = dataswift
role_arn = arn:aws:iam::123456789012:role/Operator
mfa_serial = arn:aws:iam::123456789012:mfa/name.surname@dataswift.io
```

Run the deployment command:

```
sls deploy --stage stage --region aws_region
```

Currently supported stages are: staging, sandbox, production, legacy, direct.
Refer to -> https://developers.hubofallthings.com/guides/smart-hat-engine/02-function-testing.html. You can exercise your endpoints with Postman; the Postman collection can be found at the link immediately above.