diff --git a/.hintrc b/.hintrc deleted file mode 100644 index ae62a1a0..00000000 --- a/.hintrc +++ /dev/null @@ -1,13 +0,0 @@ -{ - "extends": [ - "development" - ], - "hints": { - "axe/text-alternatives": [ - "default", - { - "image-alt": "off" - } - ] - } -} \ No newline at end of file diff --git a/LICENSE b/LICENSE deleted file mode 100644 index b749d358..00000000 --- a/LICENSE +++ /dev/null @@ -1,7 +0,0 @@ -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/README.md b/README.md deleted file mode 100644 index fb4249a1..00000000 --- a/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# Byodr - Build Your Own Delivery Robot - -Routine transport of small goods - packages, medicine, inspection equipment - can be done autonomously with simple routes. -An unmanned vehicle is built to be smaller, cheaper and more energy efficient, because there is no driver on board. -Such a vehicle requires teleoperation by an operator and self-driving software. - -This project comprises the set of software services that run this rover and others like it. - -[![](docs/img/readme/rover_front_small.jpg)](https://vimeo.com/461308029 "Type 'Industrial'") - -> We went ahead and designed a rover that can be assembled from **generally available** components on the internet. 
- -## Links -* Website - [www.mwlc.global](http://www.mwlc.global) -* Documentation - [read the docs](https://byodr.readthedocs.io) -* Youtube - [More Work Less Carbon](https://www.youtube.com/channel/UCcR4AaPJflGaWlBFhHefzpQ) - -## Features -* Browser based teleoperation via internet -* Community driven self-driving models -* Dockerized -* Free for personal use - diff --git a/Services_Documentation.txt b/Services_Documentation.txt deleted file mode 100644 index 75dffcb7..00000000 --- a/Services_Documentation.txt +++ /dev/null @@ -1,197 +0,0 @@ -Service Architecture -watchdog function -Smart Segment -A smart segment is a part of the robot that houses the computer systems that allow it to move autonomously. Connected segments make the robot act like a caterpillar. -This computer system includes a Raspberry Pi, a Jetson Nano, 2 cameras, a router and 2 motor controllers. - -Hardware -1) AI Camera (camera0) -A smart segment uses a Hikvision PTZ Dome Network camera for its AI Camera. -IP: 192.168.1.64 --Input: Footage from its surroundings --Output: Sends H264 encoded video output to the Pi’s Docker container service called “Stream0”. - -2) Operator Camera (camera1) -The smart segment also uses a second Hikvision PTZ Dome Network camera for an Operator Camera. -IP: 192.168.1.65 --Input: Footage from its surroundings --Output: Sends H264 encoded video output to the Pi’s Docker container service called “Stream1”. - -3) Raspberry Pi 4B -OS: balena-cloud-byodr- pi-raspberrypi4-64-2.99.27-v14.0.8 -IP: 192.168.1.32 -This OS allows it to communicate with Balena Cloud. Inside the Pi, there are 5 processes running, 4 of which run in their own separate Docker containers. - -4) Nvidia Jetson Nano -OS: balena-cloud-byodr-nano-jetson-nano-2.88.4+rev1-v12.11.0 -IP: 192.168.1.100 -This OS allows it to communicate with Balena Cloud. Inside the Nano, there are 10 processes running, all of which run in their own separate Docker containers.
- -5) RUT-955 -IP: 192.168.1.1 -The router inside the segment is called RUT955 from Teltonika. The router has LAN, WAN, 4G, 5G and LTE capabilities. Its Ethernet connectivity is extended with a switch. The router is responsible for all internet connectivity between the segment and the rest of the Internet. -This router also includes an internal relay that works as a switch that lets the battery power the rest of the segment. Only when the router has booted up and the relay closes does the rest of the segment's internal components receive power. - -6) Motor Controller 1 -The segment uses the Mini FSESC6.7 from Flipsky. It is connected via USB to the ttyACM0 serial port of the Pi. --Input: Commands from the Pi. --Output: Sends power to its respective motor wheel in order to turn it according to its commands. - -7) Motor Controller 2 -The segment uses the Mini FSESC6.7 from Flipsky. It is connected via USB to the ttyACM1 serial port of the Pi. --Input: Commands from the Pi. --Output: Sends power to its respective motor wheel in order to turn it according to its commands. - -Software stack -1) Balena -From Balena, we use their Balena Cloud services, and also use BalenaOS on the Raspberry Pi and Jetson Nano, to make them compatible with Balena Cloud. From the Balena Cloud we can upload new versions of software, update segments OTA, reboot, connect via SSH, and manage segments remotely. - -2) Docker -Docker is a platform for building, shipping, and running applications in containers. Containers are lightweight, portable, and self-sufficient units that contain all the necessary software, libraries, and dependencies to run an application. Docker enables developers to package their applications into containers, which can be easily deployed and run on any platform that supports Docker. With Docker, developers can ensure that their applications run consistently across different environments, from development to production.
-The BYODR project includes Dockerfiles that can be used to build a Docker image for each service, as well as instructions on how to deploy the image onto a robot using Balena Cloud. By using this approach, users can ensure that the software stack is consistent and reproducible across multiple robots, and can easily deploy updates and manage their fleet of robots from a central location. - -3) Zerotier -Zerotier is a “freemium” P2P (Peer to Peer) VPN service that allows devices with internet capabilities to securely connect to P2P virtual software-defined networks. -The Pi has a Zerotier instance running inside it. This means that it is equipped to work with a Zerotier client that is running on our devices, so that we can add the Pi to our VPN network. -The Nano has the same Zerotier functionality as the Pi, though it is arguably more important here, since it allows the user to connect to the Nano, and by extension the web server, via a secure Zerotier network. - -4) Wireguard -Similarly to Zerotier, Wireguard is also a VPN. The difference is that Wireguard is used by the Nano in every network procedure it has to go through, for safety. Since the Nano has many more processes that require a network connection than the Pi, Wireguard is an extra layer of security against attacks. This process runs inside a Docker container. --Q: Why do we use Wireguard if we have ZT? --A: Although ZeroTier and WireGuard look similar, the project uses them for different purposes. ZeroTier is used to create a secure network connection between the robot and the user's computer, while WireGuard is used to encrypt the data that is transmitted over that connection. Together, these technologies provide a secure and reliable way for users to remotely control the robots. - -Raspberry Pi docker service descriptions: -1) Stream0 --Input: Receives video stream from the AI camera.
--Function: Creates a high quality H264 video output stream --Output: Sends the stream via RTSP to the web server located in Teleop. --Q1: Why does the Pi create the streams, instead of sending them from the cameras directly to the Nano, bypassing the Pi? - - - -2) Stream1 --Input: Receives video stream from the Operator camera. --Purpose: Similarly to the process above, it creates a high quality H264 video output stream. --Output: Sends the stream via RTSP to the web server located in Teleop. --Q1: How does the AI get its images? From the H264 stream, or somewhere else? - -3) Zerotier --Input: Receives input from the user, using the built-in command line. --Function: We can add the Pi to our VPN network. --Output: The Pi can communicate with the software-defined virtual networks that the user has built, via the internet. --Q1: Why does the Pi need Zerotier? - -4) Servos --Input: Receives commands in JSON format from Teleop, Inference and Pilot that request movement from the motors. --Function: Sets up a JSON server that listens on 0.0.0.0:5555 for commands from other processes. Binding to 0.0.0.0 means accepting connections from anywhere that has network access to this device. It also sets up a JSON publisher so that this service can send JSON data to any services that are listening to it. Commands received from the other services are decoded and stored in a deque. -This service also starts an HTTP server listening on port 9101 (the default option). --Output: The commands are sent to the motor controllers via the serial USB connection. --Q1: BalenaCloud lists another service called “pigpiod”. This service and the “servos” service both use the same image in their docker container. What does the pigpiod service do? --Q2: Why does this service have a JSON publisher? Who does it send data to? - -5) Battery Management System (BMS) [The only service that does not run in a docker container] --Input: Receives data from the BMS inside the battery itself.
--Function: The Pi uses an I2C Pi Hat to communicate with the special battery that the segment uses. From here, the Pi can give a “pulse” to the battery in order to “reset” it. This system also allows for seamless use of 2 or more of the same battery on the same segment. This process is exclusively hardware based, so it is not running in a container. --Output: Sends data to the BMS inside the battery. - -Jetson Nano docker service descriptions: -1) HTTPD --Input: Listens for data requests from Teleop, Pilot, Stream1, Stream0. The sources are listed in the configuration file (called haproxy.conf) that the proxy server uses. --Function: This service sets up a proxy server (an intermediate server between the client and the actual HTTP server) using HAProxy, for load balancing and request forwarding. --Output: Forwards requests to the same services as above, taking load balancing into account. The destinations are listed in the configuration files that the proxy server uses. - -2) Inference --Input 1: Receives stream from the AI camera with the socket URL 'ipc:///byodr/camera_0.sock' --Input 2: Receives routes from Teleop with the socket URL 'ipc:///byodr/teleop.sock' --Input 3: Receives timestamps from Teleop with the socket URL 'ipc:///byodr/teleop_c.sock' --Function: This service provides an interface for generating steering angles and making predictions based on images. These actions are based on a trained neural network model. If this service receives input from Teleop, that input overrides the self-driving directions of the model. -This service also starts an IPC server with URL 'ipc:///byodr/inference_c.sock', and a JSON publisher with URL 'ipc:///byodr/inference.sock' --Output: Sends data to the Servos service for proper motor control, though it is not clear where and how this happens. --Q1: How do Inference, Pilot and Teleop work together, if they work together? --Q2: How does Inference send its data to the Pi for proper movement?
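The JSON publisher/collector exchange described above (services passing JSON payloads over ipc:// socket URLs) can be sketched as follows. This is a minimal stand-in using only the Python standard library, not the project's actual messaging code; the ipc:// URL scheme suggests a ZeroMQ-style transport, and the field names ("steering", "throttle") and the length-prefixed framing here are illustrative assumptions only.

```python
import json
import socket

# A connected socket pair stands in for the IPC transport between two
# services on the same machine (hypothetical; the project appears to use
# ipc:// sockets via a ZeroMQ-style library instead).
publisher, collector = socket.socketpair()

# Publisher side, e.g. a service emitting a steering prediction as JSON.
message = json.dumps({"steering": 0.1, "throttle": 0.0}).encode()
# Length-prefix the frame so the reader knows how many bytes to expect.
publisher.sendall(len(message).to_bytes(4, "big") + message)

# Collector side, e.g. a service decoding the command before acting on it.
size = int.from_bytes(collector.recv(4), "big")
command = json.loads(collector.recv(size).decode())
print(command["steering"])
```

The point of the pattern is that each service only agrees on a socket address and a JSON schema; who is on the other end can change without touching the sender.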
- -3) Zerotier --Input: Receives input from the user, using the built-in command line. --Function: The Nano can be added to a virtual network. --Output: Can communicate securely with nodes of the same network. - -4) WireGuard --Input: Receives data from the Nano and the Router. --Function: Encrypts the data of the Nano. --Output: The data sent by the Nano towards the internet is encrypted. --Q: Why do we use Wireguard if we have ZT? --A: Although ZeroTier and WireGuard look similar, the project uses them for different purposes. ZeroTier is used to create a secure network connection between the robot and the user's computer, while WireGuard is used to encrypt the data that is transmitted over that connection. Together, these technologies provide a secure and reliable way for users to remotely control the robots. - -5) Teleop --Input 1: Receives stream from the Stream0 service of the Pi --Input 2: Receives stream from the Stream1 service of the Pi --Input 3: Receives data in JSON format from the Pilot service --Input 4: Receives data in JSON format from the Vehicle service --Input 5: Receives data in JSON format from the Inference service --Input 6: Receives input from the Operator’s method of control --Function: This service includes a web server that listens for inputs from multiple sources that are later used to move the robot. The key presses from the operator are registered and reflected on the robot using this service. -This service includes a logger that logs information regarding the manual control of the robot. -In addition, there is a function in this server that encodes the streams from the cameras to MJPEG. -It also hosts the site design files necessary to draw the Web App. --Output 1: Robot movement according to the user’s commands --Output 2: Live video feed on the web app --Output 3: MJPEG stream capability --Output 4: Logs and messages produced during operation are stored in MongoDB.
--Q1: How does “Teleop” translate user input into robot movement? --Q2: How does it communicate with the cameras, “Pilot”, “Inference” and “Vehicle”? --Q3: What data does it receive from and send to the Pilot, Vehicle and Inference services? --Q4: From where does it receive its navigation images? --Q5: What does it do with the images? - -6) Vehicle --Input 1: Receives data in JSON format from the Pilot service --Input 2: Receives data in JSON format from the Teleop service --Function: This process sets up a server that connects to a CARLA simulator and communicates with it to control a self-driving car. CARLA is an open-source simulator for autonomous driving research. It is used to simulate the robot’s behavior in a virtual environment. --Output: Sends the data to a server running an instance of CARLA. The data sent will properly represent a segment inside the simulation. --Q1: Is this process exclusively used to send data to the CARLA simulator, and nothing else regarding the driving of the robot? --Q2: Where is the CARLA simulation hosted? --Q3: What do the video streams created in the server do exactly? - -7) ROS Node --Input 1: Receives data in JSON format from the Pilot service --Input 2: Receives data in JSON format from the Teleop service --Function: This service defines a ROS2 node which connects to a teleop node and a pilot node, and switches the driving mode to Autopilot or Manual, depending on user input. It also sets a max speed for the segment. --Output: Sends ROS commands in JSON format to the Pilot service --Q1: Why exactly do we need this service? --Q2: Does the communication with other services imply the existence of multiple nodes? --Q3: Why does it publish JSON data only to the Pilot, and not to both Pilot and Teleop?
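The mode-switching behavior described for the ROS node, picking between Autopilot and Manual based on user input and enforcing a maximum speed, can be illustrated with a small sketch. This is plain Python rather than an actual ROS2 node, and the command fields, mode labels and speed cap are assumptions for illustration, not values from the project.

```python
# Hypothetical sketch: "speed"/"steering" fields, the mode labels and
# MAX_SPEED are illustrative assumptions, not taken from the BYODR code.
MAX_SPEED = 2.0  # assumed cap, m/s

def route_command(mode, teleop_cmd, pilot_cmd):
    """Select the active command source, then clamp its speed to the cap."""
    cmd = dict(teleop_cmd if mode == "manual" else pilot_cmd)
    cmd["speed"] = max(-MAX_SPEED, min(MAX_SPEED, cmd["speed"]))
    return cmd

# Manual mode: the operator command wins, but its speed is still clamped.
manual = route_command("manual", {"speed": 5.0, "steering": 0.2}, {"speed": 1.0, "steering": 0.0})
# Autopilot mode: the pilot command is forwarded (already under the cap).
auto = route_command("autopilot", {"speed": 5.0, "steering": 0.2}, {"speed": 1.0, "steering": 0.0})
print(manual, auto)
```

Clamping in one place, regardless of mode, means a runaway value from either source is bounded before it ever reaches the motor controllers.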
- -8) Pilot --Input 1: Receives data in JSON format from the Teleop service --Input 2: Receives data in JSON format from the Rosnode service --Input 3: Receives data in JSON format from the Vehicle service --Input 4: Receives data in JSON format from the Inference service --Input 5: Receives IPC chatter in JSON format from the Teleop service --Function: This process sets up a JSON publisher and a local IPC server to send data to other services that have JSON collectors. It is also responsible for controlling the segment’s autonomous movement by using a pre-trained AI model. --Output: Sends JSON commands to the Servos service to enable the robot to drive autonomously. --Q1: Is this process exclusively used by the robot for its autonomous driving? --Q2: How does this service cooperate with “Inference”? --Q3: Why does this service start an HTTP server? --Q4: What is an IPC chatter JSON receiver? --Q5: What is a local IPC server? (It uses _c in its name => c = chatter?) - -9) MongoDB --Input: Receives data from the segment, and stores any and all logs produced by the other services (?) --Function: This service creates a default MongoDB user and starts a configurable MongoDB server on the local machine. --Output: Stores logs in its built-in database --Q1: What does the DB store inside it? --Q2: How does the DB get the data that it stores? - -10) FTPD --Input: Receives the newly trained model from the training server. --Function: This service creates a Pure-FTPd server with a predefined set of commands. This server is used to send the segment's training data to the training server and, similarly, to receive the trained model from it. --Output: Sends data to the AI training server with parameters for its specific training. --Q1: Is this the code that connects the Nano to the FileZilla FTP server? (Mentioned in the readthedocs) --Q2: How does this FTP server store, send and receive data from the training server? - - -General questions -Q1: How do JSON receivers and publishers work?
They are used to send data from one service to another. -Q2: If all segments are in a Zerotier network, is any data sent between the segments encrypted? - diff --git a/archived/arduino/xmaxx_arduino_190/xmaxx_arduino_190.ino b/archived/arduino/xmaxx_arduino_190/xmaxx_arduino_190.ino deleted file mode 100644 index 1406ba11..00000000 --- a/archived/arduino/xmaxx_arduino_190/xmaxx_arduino_190.ino +++ /dev/null @@ -1,125 +0,0 @@ -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#define HALL_IN_PIN A1 -#define STEERING_OUT_PIN 5 -#define THROTTLE_OUT_PIN 4 - -// For odometry. -#define H_RPS_MOMENT 0.20 - -volatile uint16_t throttle_zero_shift; - -volatile byte h_up; -volatile float h_val, h_rps; -volatile uint32_t h_detect_time, h_publish_time; - -// Time the last command was received on the throttle channel. -volatile uint32_t lastCmdReceivedTime; - -ros::NodeHandle nodeHandle; -geometry_msgs::TwistStamped message; - -// Servo objects generate the signals expected by the Electronic Speed Controller (ESC) and steering servo. -Servo servoThrottle; -Servo servoSteering; - -void msgDrive(const geometry_msgs::Twist& msg) { - throttle_zero_shift = (uint16_t) msg.linear.x; - uint16_t throttle = throttle_zero_shift + (uint16_t) msg.linear.y; - uint16_t angle = (uint16_t) msg.angular.x + (uint16_t) msg.angular.y; - lastCmdReceivedTime = micros(); - - // Apply ceilings for forward and reverse throttle as a safety measure. - if (throttle > 140) { - throttle = 140; - } else if (throttle < 40) { - throttle = 40; - } - servoThrottle.write(throttle); - // Protect the steering margins.
- if (angle > 180) { - angle = 180; - } else if (angle < 0) { - angle = 0; - } - if (servoSteering.read() != angle) { - servoSteering.write(angle); - } -} - -ros::Subscriber subscriber("/roy_teleop/command/drive", &msgDrive); - -ros::Publisher publisher("/roy_teleop/sensor/odometer", &message); - -void publish_odometer() { - message.header.stamp = nodeHandle.now(); - message.twist.linear.x = h_up; - message.twist.linear.y = h_rps; - message.twist.linear.z = analogRead(HALL_IN_PIN); - publisher.publish(&message); -} - -void hall_detect() { - if (digitalRead(HALL_IN_PIN) == HIGH) { - h_up++; - if (h_up >= 2) { - h_val = 1000000.0 / (micros() - h_detect_time); - h_rps = H_RPS_MOMENT * h_val + (1.0 - H_RPS_MOMENT) * h_rps; - h_detect_time = micros(); - h_up = 0; - } - } -} - -void setup() { - Serial.begin(57600); - - h_up = 0; - h_val = 0; - h_rps = 0; - h_detect_time = 0; - h_publish_time = 0; - - throttle_zero_shift = 0; - lastCmdReceivedTime = 0; - - pinMode(HALL_IN_PIN, INPUT_PULLUP); - - servoSteering.attach(STEERING_OUT_PIN); - servoThrottle.attach(THROTTLE_OUT_PIN); - - PCintPort::attachInterrupt(HALL_IN_PIN, hall_detect, CHANGE); - - nodeHandle.initNode(); - nodeHandle.subscribe(subscriber); - nodeHandle.advertise(publisher); -} - -void loop() { - // Stop the vehicle when command communication slows down or stops functioning. - if (micros() - lastCmdReceivedTime > 100000) { - servoThrottle.write(90 + throttle_zero_shift); - } - - // Process the odometer. - // Drop to zero when stopped. - if (micros() - h_detect_time > 500000) { - h_rps = (1.0 - H_RPS_MOMENT) * h_rps; - } - - // Avoid flooding the topic. 
- if (micros() - h_publish_time > 50000) { - publish_odometer(); - h_publish_time = micros(); - } - - nodeHandle.spinOnce(); -} diff --git a/archived/arduino/xmaxx_arduino_rc/TimedAction/TimedAction.cpp b/archived/arduino/xmaxx_arduino_rc/TimedAction/TimedAction.cpp deleted file mode 100644 index fc5ecfac..00000000 --- a/archived/arduino/xmaxx_arduino_rc/TimedAction/TimedAction.cpp +++ /dev/null @@ -1,44 +0,0 @@ -#include "TimedAction.h" - -/* -|| <> -*/ -TimedAction::TimedAction(unsigned long intervl,void (*function)()){ - active = true; - previous = 0; - interval = intervl; - execute = function; -} - -/* -|| <> -*/ -TimedAction::TimedAction(unsigned long prev,unsigned long intervl,void (*function)()){ - active = true; - previous = prev; - interval = intervl; - execute = function; -} - -void TimedAction::reset(){ - previous = millis(); -} - -void TimedAction::disable(){ - active = false; -} - -void TimedAction::enable(){ - active = true; -} - -void TimedAction::check(){ - if ( active && (millis()-previous >= interval) ) { - previous = millis(); - execute(); - } -} - -void TimedAction::setInterval( unsigned long intervl){ - interval = intervl; -} \ No newline at end of file diff --git a/archived/arduino/xmaxx_arduino_rc/TimedAction/TimedAction.h b/archived/arduino/xmaxx_arduino_rc/TimedAction/TimedAction.h deleted file mode 100644 index 553b4f6c..00000000 --- a/archived/arduino/xmaxx_arduino_rc/TimedAction/TimedAction.h +++ /dev/null @@ -1,70 +0,0 @@ -/* -|| -|| @file TimedAction.cpp -|| @version 1.6 -|| @author Alexander Brevig -|| @contact alexanderbrevig@gmail.com -|| -|| @description -|| | Provide an easy way of triggering functions at a set interval -|| # -|| -|| @license -|| | This library is free software; you can redistribute it and/or -|| | modify it under the terms of the GNU Lesser General Public -|| | License as published by the Free Software Foundation; version -|| | 2.1 of the License. 
-|| | -|| | This library is distributed in the hope that it will be useful, -|| | but WITHOUT ANY WARRANTY; without even the implied warranty of -|| | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -|| | Lesser General Public License for more details. -|| | -|| | You should have received a copy of the GNU Lesser General Public -|| | License along with this library; if not, write to the Free Software -|| | Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA -|| # -|| -*/ - -#ifndef TIMEDACTION_H -#define TIMEDACTION_H - -#include "Arduino.h" - -#define NO_PREDELAY 0 - -class TimedAction { - - public: - TimedAction(unsigned long interval,void (*function)()); - TimedAction(unsigned long prev,unsigned long interval,void (*function)()); - - void reset(); - void disable(); - void enable(); - void check(); - - void setInterval( unsigned long interval ); - - private: - bool active; - unsigned long previous; - unsigned long interval; - void (*execute)(); - -}; - -#endif - -/* -|| @changelog -|| | 1.6 2010-10-08 - Alexander Brevig : Changed datatype of interval from unsigned int to unsigned long -|| | 1.5 2009-10-25 - Alexander Brevig : Added setInterval , requested by: Boris Neumann -|| | 1.4 2009-05-06 - Alexander Brevig : Added reset() -|| | 1.3 2009-04-16 - Alexander Brevig : Added disable() and enable(), requested by: http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?action=viewprofile;username=ryno -|| | 1.2 2009-04-13 - Alexander Brevig : Added a constructor -|| | 1.1 2009-04-08 - Alexander Brevig : Added an example that demonstrates three arduino examples at once -|| | 1.0 2009-03-23 - Alexander Brevig : Initial Release -|| # -*/ diff --git a/archived/arduino/xmaxx_arduino_rc/TimedAction/keywords.txt b/archived/arduino/xmaxx_arduino_rc/TimedAction/keywords.txt deleted file mode 100644 index 5bb7b763..00000000 --- a/archived/arduino/xmaxx_arduino_rc/TimedAction/keywords.txt +++ /dev/null @@ -1,8 +0,0 @@ -TimedAction KEYWORD1 - -reset 
KEYWORD2 -enable KEYWORD2 -disable KEYWORD2 -check KEYWORD2 - -NO_PREDELAY LITERAL1 diff --git a/archived/arduino/xmaxx_arduino_rc/xmaxx_arduino_rc.ino b/archived/arduino/xmaxx_arduino_rc/xmaxx_arduino_rc.ino deleted file mode 100644 index 85f3aa6b..00000000 --- a/archived/arduino/xmaxx_arduino_rc/xmaxx_arduino_rc.ino +++ /dev/null @@ -1,340 +0,0 @@ -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -// Assign your channel in pins. -#define THROTTLE_IN_PIN A1 -#define STEERING_IN_PIN A0 - -// Assign your channel out pins. -#define THROTTLE_OUT_PIN 6 -#define STEERING_OUT_PIN 5 - -#define analogLow 988 -#define analogHigh 1988 -#define analogCenter 1488 -#define analogCenterLow 1458 -#define analogCenterHigh 1518 - -// Time to wait each loop. -#define SPIN_DELAY 2 - -// Single logging function timer. -#define TELEMETRY_ACTION_DELAY 30 - -// In keyboard control mode set throttle to zero on message receiver timeout. -#define CMD_RECEIVED_TIMEOUT 500 - -// These bit flags are set in bUpdateFlagsShared to indicate which -// channels have new signals. -#define THROTTLE_FLAG 1 -#define STEERING_FLAG 2 - -// Time the last command was received on the throttle channel. -volatile uint32_t lastCmdReceivedTime; - -// holds the update flags defined above -volatile uint8_t bUpdateFlagsShared; - -// shared variables are updated by the ISR and read by loop. -// In loop we immediately take local copies so that the ISR can keep ownership of the -// shared ones. To access these in loop -// we first turn interrupts off with noInterrupts -// we take a copy to use in loop and then turn interrupts back on -// as quickly as possible; this ensures that we are always able to receive new signals -volatile uint16_t unThrottleInShared; -volatile uint16_t unSteeringInShared; - -// These are used to record the rising edge of a pulse in the calcInput functions -// They do not need to be volatile as they are only used in the ISR.
If we wanted -// to refer to these in loop and the ISR then they would need to be declared volatile -uint32_t ulThrottleStart; -uint32_t ulSteeringStart; - -// control source {1: human-radio, 2: human-keyboard, 3: computer} -// Read the values from the pins in radio control mode only. -volatile uint8_t control_mode; - -ros::NodeHandle nodeHandle; -geometry_msgs::TwistStamped message; - -// Servo objects generate the signals expected by Electronic Speed Controllers and Servos -// We will use the objects to output the signals we read in -// this example code provides a straight pass through of the signal with no custom processing -Servo servoThrottle; -Servo servoSteering; - -TimedAction cmdLogAction = TimedAction(TELEMETRY_ACTION_DELAY, publish_telemetry); - -/* - The interrupt service routine attached to the throttle pin. -*/ -void calcThrottle() { - if (control_mode == 1) { - // if the pin is high, it's a rising edge of the signal pulse, so let's record its value - if (digitalRead(THROTTLE_IN_PIN) == HIGH) { - ulThrottleStart = micros(); - } - else { - // else it must be a falling edge, so let's get the time and subtract the time of the rising edge - // this gives us the time between the rising and falling edges i.e. the pulse duration. - unThrottleInShared = (uint16_t)(micros() - ulThrottleStart); - // we set the throttle flag to indicate that a new throttle signal has been received - bUpdateFlagsShared |= THROTTLE_FLAG; - } - } -} - -/* - The interrupt service routine attached to the steering pin. -*/ -void calcSteering() { - if (control_mode == 1) { - if (digitalRead(STEERING_IN_PIN) == HIGH) { - ulSteeringStart = micros(); - } - else { - unSteeringInShared = (uint16_t)(micros() - ulSteeringStart); - bUpdateFlagsShared |= STEERING_FLAG; - } - } -} - -ros::Publisher publisher("/roy_teleop/cmd_logging", &message); - -/* - 2 Channels in keyboard mode separate throttle from steering.
- angular.x = channel {0: none, 1: throttle, 2: steering, 3: both} - angular.y = _ - angular.z = steering - linear.x = control source {0: use last, 1: human-radio, 2: human-keyboard, 3: computer} - linear.y = _ - linear.z = throttle -*/ -void publish_telemetry() { - publish_teleop(servoSteering.readMicroseconds(), servoThrottle.readMicroseconds(), (int) control_mode); -} - -void publish_teleop(int angle, int throttle, int ctl_source) { - message.header.stamp = nodeHandle.now(); - message.twist.angular.z = angle; - message.twist.linear.x = ctl_source; - message.twist.linear.z = throttle; - publisher.publish(&message); -} - -/* - Switch off throttle on command receiver timeout when not in radio mode. -*/ -void check_throttle() { - if (control_mode == 1) { - return; - } - - if ((micros() - lastCmdReceivedTime) / 1000 > CMD_RECEIVED_TIMEOUT) { - if (servoThrottle.readMicroseconds() != analogCenter) { - servoThrottle.writeMicroseconds(analogCenter); - } - } -} - -/* - Drive the servos by radio control. -*/ -void rcDrive() { - if (control_mode != 1) { - return; - } - - // create local variables to hold local copies of the channel inputs - // these are declared static so that their values will be retained - // between calls to loop. - static uint16_t unThrottleIn; - static uint16_t unSteeringIn; - - // local copy of update flags - static uint8_t bUpdateFlags; - - // check shared update flags to see if any channels have a new signal - if (bUpdateFlagsShared) { - // turn interrupts off quickly while we take local copies of the shared variables - noInterrupts(); - - // take a local copy of which channels were updated in case we need to use this in the rest of loop - bUpdateFlags = bUpdateFlagsShared; - - // in the current code, the shared values are always populated - // so we could copy them without testing the flags - // however in the future this could change, so let's - // only copy when the flags tell us we can.
- if (bUpdateFlags & THROTTLE_FLAG) { - unThrottleIn = unThrottleInShared; - } - - if (bUpdateFlags & STEERING_FLAG) { - unSteeringIn = unSteeringInShared; - } - - // clear shared copy of updated flags as we have already taken the updates - // we still have a local copy if we need to use it in bUpdateFlags - bUpdateFlagsShared = 0; - - interrupts(); // we have local copies of the inputs, so now we can turn interrupts back on - // as soon as interrupts are back on, we can no longer use the shared copies, the interrupt - // service routines own these and could update them at any time. During the update, the - // shared copies may contain junk. Luckily we have our local copies to work with :-) - } - - // do any processing from here onwards - // only use the local values unAuxIn, unThrottleIn and unSteeringIn, the shared - // variables unAuxInShared, unThrottleInShared, unSteeringInShared are always owned by - // the interrupt routines and should not be used in loop - - // the following code provides simple pass through - // this is a good initial test, the Arduino will pass through - // receiver input as if the Arduino is not there. - // This should be used to confirm the circuit and power - // before attempting any custom processing in a project. - - // we are checking to see if the channel value has changed, this is indicated - // by the flags. For the simple pass through we don't really need this check, - // but for a more complex project where a new signal requires significant processing - // this allows us to only calculate new values when we have new inputs, rather than - // on every cycle. 
- - if (bUpdateFlags & THROTTLE_FLAG) { - if (servoThrottle.readMicroseconds() != unThrottleIn) { - servoThrottle.writeMicroseconds(unThrottleIn); - } - } - - if (bUpdateFlags & STEERING_FLAG) { - if (unSteeringIn > analogCenterLow && unSteeringIn < analogCenterHigh) { - centerSteering(); - } - else if (servoSteering.readMicroseconds() != unSteeringIn) { - servoSteering.writeMicroseconds(unSteeringIn); - } - } - - bUpdateFlags = 0; -} - -/* - Ros command topic listener function. - Drives the servos with the command values. - (use-cases: - input toggle control = switch joystick on or off - input drive = throttle and steering - ) - - 2 Channels in keyboard mode separate throttle from steering. - angular.x = channel {0: none, 1: throttle, 2: steering, 3: both} - angular.y = _ - angular.z = steering - linear.x = control source {0: use last, 1: human-radio, 2: human-keyboard, 3: computer} - linear.y = _ - linear.z = throttle -*/ -void msgDrive(const geometry_msgs::Twist& msg) { - // Use the current control setting unless change is required. - int _control = (int) msg.linear.x; - if (_control > 0) { - control_mode = _control; - } - - // Abort cmd processing when on radio. - if (control_mode == 1) { - return; - } - - uint16_t channel = (uint16_t) msg.angular.x; - uint16_t angle = (uint16_t) msg.angular.z; - uint16_t throttle = (uint16_t) msg.linear.z; - - // Drive the servos depending on which channel is active. - if (channel == 1 || channel == 3) { - lastCmdReceivedTime = micros(); - if (servoThrottle.readMicroseconds() != throttle) { - // Protect against malformed cmd values. - if (throttle > analogHigh) { - throttle = analogHigh; - } else if (throttle < analogLow) { - throttle = analogLow; - } - servoThrottle.writeMicroseconds(throttle); - } - } - - if (channel == 2 || channel == 3) { - if (servoSteering.readMicroseconds() != angle) { - // Protect against malformed cmd values. 
- if (angle > analogHigh) { - angle = analogHigh; - } else if (angle < analogLow) { - angle = analogLow; - } - servoSteering.writeMicroseconds(angle); - } - } -} - -ros::Subscriber subscriber("/roy_teleop/cmd_vel", &msgDrive); - -void centerSteering() { - servoSteering.writeMicroseconds(analogCenter); -} - -void setup() { - Serial.begin(57600); - - servoSteering.attach(STEERING_OUT_PIN); - servoThrottle.attach(THROTTLE_OUT_PIN); - - PCintPort::attachInterrupt(STEERING_IN_PIN, calcSteering, CHANGE); - PCintPort::attachInterrupt(THROTTLE_IN_PIN, calcThrottle, CHANGE); - - centerSteering(); - delay(300); - - control_mode = 1; - - nodeHandle.initNode(); - nodeHandle.subscribe(subscriber); - nodeHandle.advertise(publisher); -} - -void loop() { - rcDrive(); - nodeHandle.spinOnce(); - check_throttle(); - cmdLogAction.check(); - delay(SPIN_DELAY); -} - - - - - - - - - - - - - - - - - - - - diff --git a/archived/docker/docker-compose.dev.yml b/archived/docker/docker-compose.dev.yml deleted file mode 100644 index 1d57aed6..00000000 --- a/archived/docker/docker-compose.dev.yml +++ /dev/null @@ -1,32 +0,0 @@ -version: '2' -services: - zerotier: - image: rwgrim/docker-noop:latest - relay: - volumes: - - ./common:/common - - ./vehicles/rover:/app - cam1: - volumes: - - ./common:/common - - ./vehicles/rover:/app - vehicle: - volumes: - - ./common:/common - - ./vehicles/rover:/app - teleop: - volumes: - - ./common:/common - - ./teleop:/app - pilot: - volumes: - - ./common:/common - - ./pilot:/app - recorder: - volumes: - - ./common:/common - - ./recorder:/app - inference: - volumes: - - ./common:/common - - ./inference:/app diff --git a/archived/docker/docker-compose.pytest.yml b/archived/docker/docker-compose.pytest.yml deleted file mode 100644 index 5b42a463..00000000 --- a/archived/docker/docker-compose.pytest.yml +++ /dev/null @@ -1,17 +0,0 @@ -version: '2' -services: - teleop: - user: root - volumes: - - ../build/pytest_cache:/pytest_cache - - ../common:/common - - 
../teleop:/app - pilot: - user: root - volumes: - - ../pilot:/app - recorder: - user: root - volumes: - - ../recorder:/app - diff --git a/archived/docker/docker-compose.tegra.dev.yml b/archived/docker/docker-compose.tegra.dev.yml deleted file mode 100644 index 57b66370..00000000 --- a/archived/docker/docker-compose.tegra.dev.yml +++ /dev/null @@ -1,31 +0,0 @@ -version: '2' -services: - cam1: - build: - dockerfile: vehicles/rover/Dockerfile - volumes: - - ../common:/common - - ../vehicles/rover:/app - nodews: - build: - dockerfile: teleop/nodews/Dockerfile - teleop: - volumes: - - ../common:/common - - ../teleop:/app - pilot: - volumes: - - ../pilot:/app - recorder: - volumes: - - ../recorder:/app - vehicle: - build: - dockerfile: vehicles/rover/Dockerfile - volumes: - - ../vehicles/rover:/app - inference: - build: - dockerfile: inference/Dockerfile.jp - volumes: - - ../inference:/app diff --git a/archived/docker/docker-compose.tegra.yml b/archived/docker/docker-compose.tegra.yml deleted file mode 100644 index db0b3c67..00000000 --- a/archived/docker/docker-compose.tegra.yml +++ /dev/null @@ -1,62 +0,0 @@ -version: '2' -services: - cam1: - build: - context: .. - image: centipede2donald/byodr-ce:rover - container_name: byodr_cam1 - restart: always - command: python stream.py - stop_signal: SIGKILL - volumes: - - ${DC_CONFIG_DIR}:/config - nodews: - build: - context: .. - image: centipede2donald/byodr-ce:nodews - container_name: byodr_nodews - restart: always - stop_signal: SIGKILL - depends_on: - - "cam1" - ports: - - "9101:9101" - volumes: - - ${DC_CONFIG_DIR}:/config - teleop: - restart: always - environment: - LD_PRELOAD: libgomp.so.1 - pilot: - restart: always - environment: - LD_PRELOAD: libgomp.so.1 - recorder: - restart: always - environment: - LD_PRELOAD: libgomp.so.1 - vehicle: - image: centipede2donald/byodr-ce:rover - # Cannot read the usbrelay device symlink in docker-compose. 
- privileged: true - network_mode: "host" - restart: always - environment: - LD_PRELOAD: libgomp.so.1 - inference: - image: centipede2donald/byodr-ce:inference-tegra - restart: always - environment: - LD_PRELOAD: libgomp.so.1 - volumes: - - '/usr/local/cuda:/usr/local/cuda' - - '/usr/lib/aarch64-linux-gnu:/usr-extra' - devices: - - '/dev/nvhost-ctrl' - - '/dev/nvhost-ctrl-gpu' - - '/dev/nvhost-prof-gpu' - - '/dev/nvmap' - - '/dev/nvhost-gpu' - - '/dev/nvhost-as-gpu' - - diff --git a/archived/docker/docker-compose.yml b/archived/docker/docker-compose.yml deleted file mode 100644 index 9f831368..00000000 --- a/archived/docker/docker-compose.yml +++ /dev/null @@ -1,53 +0,0 @@ -version: '2' -services: - teleop: - build: - context: .. - dockerfile: teleop/Dockerfile - image: centipede2donald/byodr-ce:teleop - container_name: byodr_teleop - ports: - - "80:9100" - volumes: - - ${DC_CONFIG_DIR}:/config - pilot: - build: - context: .. - dockerfile: pilot/Dockerfile - image: centipede2donald/byodr-ce:pilot - container_name: byodr_pilot - volumes: - - ${DC_CONFIG_DIR}:/config - volumes_from: - - teleop:rw - recorder: - build: - context: .. - dockerfile: recorder/Dockerfile - image: centipede2donald/byodr-ce:recorder - container_name: byodr_recorder - volumes: - - ${DC_CONFIG_DIR}:/config - - ${DC_RECORDER_SESSIONS}:/sessions - volumes_from: - - teleop:rw - vehicle: - build: - context: .. - image: rwgrim/docker-noop:latest - container_name: byodr_vehicle - volumes: - - ${DC_CONFIG_DIR}:/config - volumes_from: - - teleop:rw - inference: - build: - context: .. 
- image: rwgrim/docker-noop:latest - container_name: byodr_inference - volumes: - - ${DC_CONFIG_DIR}:/config - volumes_from: - - teleop:rw - - diff --git a/archived/docker/jp43-nano-cp36-base.dockerfile b/archived/docker/jp43-nano-cp36-base.dockerfile deleted file mode 100644 index 832bc9fe..00000000 --- a/archived/docker/jp43-nano-cp36-base.dockerfile +++ /dev/null @@ -1,29 +0,0 @@ -# Sourced from https://github.com/balena-io-examples/jetson-nano-x11/blob/master/nano_32_3_1/Dockerfile.template -# docker build -f jp43-nano-cp36-base.dockerfile -t centipede2donald/nvidia-jetson:jp43-nano-cp36-base . -FROM balenalib/jetson-nano-ubuntu:bionic - -# Prevent apt-get prompting for input -ENV DEBIAN_FRONTEND noninteractive - -# Download and install BSP binaries for L4T 32.3.1 -RUN apt-get update && apt-get install -y wget tar lbzip2 python3 libegl1 && \ - wget https://developer.nvidia.com/embedded/dlc/r32-3-1_Release_v1.0/t210ref_release_aarch64/Tegra210_Linux_R32.3.1_aarch64.tbz2 && \ - tar xf Tegra210_Linux_R32.3.1_aarch64.tbz2 && \ - cd Linux_for_Tegra && \ - sed -i 's/config.tbz2/config.tbz2 --exclude=etc\/hosts --exclude=etc\/hostname/g' apply_binaries.sh && \ - sed -i 's/install --owner=root --group=root \"${QEMU_BIN}\" \"${L4T_ROOTFS_DIR}\/usr\/bin\/\"/#install --owner=root --group=root \"${QEMU_BIN}\" \"${L4T_ROOTFS_DIR}\/usr\/bin\/\"/g' nv_tegra/nv-apply-debs.sh && \ - sed -i 's/LC_ALL=C chroot . mount -t proc none \/proc/ /g' nv_tegra/nv-apply-debs.sh && \ - sed -i 's/umount ${L4T_ROOTFS_DIR}\/proc/ /g' nv_tegra/nv-apply-debs.sh && \ - sed -i 's/chroot . \// /g' nv_tegra/nv-apply-debs.sh && \ - ./apply_binaries.sh -r / --target-overlay && cd .. 
&& \
-    rm -rf Tegra210_Linux_R32.3.1_aarch64.tbz2 && \
-    rm -rf Linux_for_Tegra && \
-    echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && ldconfig
-
-# From https://github.com/BouweCeunen/computer-vision-jetson-nano/blob/master/dockers/l4t/Dockerfile
-ARG POWER_MODE=0000
-RUN ln -s /etc/nvpmodel/nvpmodel_t210_jetson-nano.conf /etc/nvpmodel.conf && \
-    ln -s /etc/systemd/system/nvpmodel.service /etc/systemd/system/multi-user.target.wants/nvpmodel.service && \
-    mkdir /var/lib/nvpmodel && \
-    echo "/etc/nvpmodel.conf" > /var/lib/nvpmodel/conf_file_path && \
-    echo "pmode:${POWER_MODE} fmode:fanNull" > /var/lib/nvpmodel/status
\ No newline at end of file
diff --git a/archived/docker/jp43-nano-cp36-cuda.dockerfile b/archived/docker/jp43-nano-cp36-cuda.dockerfile
deleted file mode 100644
index bab047cf..00000000
--- a/archived/docker/jp43-nano-cp36-cuda.dockerfile
+++ /dev/null
@@ -1,26 +0,0 @@
-# docker build -f jp43-nano-cp36-cuda.dockerfile -t centipede2donald/nvidia-jetson:jp43-nano-cp36-cuda .
-FROM centipede2donald/nvidia-jetson:jp43-nano-cp36-base
-
-WORKDIR /app
-
-# Download with the nvidia sdk manager
-COPY /cuda-repo-l4t-10-0*.deb .
-COPY /libcudnn7_7*.deb .
- -ENV DEBIAN_FRONTEND noninteractive - -RUN dpkg -i cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb && \ - apt-key add /var/cuda-repo-10-0-local-10.0.326/*.pub && \ - apt-get update && \ - apt-get install -y cuda-toolkit-10-0 && \ - apt-get install -y --no-install-recommends ./libcudnn7_7.6.3.28-1+cuda10.0_arm64.deb && \ - rm -rf *.deb && \ - dpkg --remove cuda-repo-l4t-10-0-local-10.0.326 && \ - dpkg -P cuda-repo-l4t-10-0-local-10.0.326 && \ - echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && \ - ldconfig && \ - rm -rf /usr/local/cuda-10.0/doc &&\ - apt-get -y clean && \ - rm -rf /var/lib/apt/lists/* - -WORKDIR / \ No newline at end of file diff --git a/archived/docker/jp43-nano-cp36-tf115-opencv440.dockerfile b/archived/docker/jp43-nano-cp36-tf115-opencv440.dockerfile deleted file mode 100644 index 130abaec..00000000 --- a/archived/docker/jp43-nano-cp36-tf115-opencv440.dockerfile +++ /dev/null @@ -1,27 +0,0 @@ -# docker build -f jp43-nano-cp36-tf115-opencv440.dockerfile -t centipede2donald/nvidia-jetson:jp43-nano-cp36-tensorflow-115-opencv-440 . 
-FROM centipede2donald/nvidia-jetson:jp43-nano-cp36-tensorflow-115 - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install -y --no-install-recommends \ - python3-zmq \ - libgl1-mesa-glx \ - && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -RUN mkdir /wheelhouse -ENV PIP_WHEEL_DIR=/wheelhouse -ENV WHEELHOUSE=/wheelhouse -ENV PIP_FIND_LINKS=/wheelhouse - -RUN python3 -m pip install "opencv-python >=4.4.0.42,<4.5.0" && \ - python3 -m pip install "numpy<1.19.0,>=1.16.0" && \ - python3 -m pip install "scipy>=1.4.1,<1.5" && \ - python3 -m pip install "jsoncomment==0.3.3" && \ - python3 -m pip install "Equation==1.2.1" && \ - python3 -m pip install "pytest==4.6.11" && \ - rm -rf /root/.cache - -#RUN ls -1 -d /wheelhouse/*.whl | xargs python3 -m pip install --no-cache-dir && rm -rf /wheelhouse -RUN rm -rf /wheelhouse diff --git a/archived/docker/jp43-nano-cp36-tf115.dockerfile b/archived/docker/jp43-nano-cp36-tf115.dockerfile deleted file mode 100644 index a307a9a0..00000000 --- a/archived/docker/jp43-nano-cp36-tf115.dockerfile +++ /dev/null @@ -1,47 +0,0 @@ -# docker build -f jp43-nano-cp36-tf115.dockerfile -t centipede2donald/nvidia-jetson:jp43-nano-cp36-tensorflow-115 . 
-FROM centipede2donald/nvidia-jetson:jp43-nano-cp36-cuda - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install -y --no-install-recommends \ - make automake gcc g++ \ - pkg-config \ - libhdf5-serial-dev \ - hdf5-tools \ - libhdf5-dev \ - build-essential \ - python3-dev \ - python3-pip \ - python3-setuptools \ - python3-wheel \ - python3-h5py \ - zlib1g-dev zip \ - libjpeg8-dev \ - liblapack-dev \ - libblas-dev \ - gfortran \ - wget && \ - apt-get -y clean && \ - rm -rf /var/lib/apt/lists/* && \ - rm -rf *.tbz2 - -RUN python3 -m pip install --no-cache-dir --upgrade pip && \ - python3 -m pip install --no-cache-dir --upgrade setuptools && \ - python3 -m pip install --no-cache-dir --upgrade wheel - -ENV LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64 -ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-10.0/targets/aarch64-linux/lib -ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/aarch64-linux-gnu -ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/aarch64-linux-gnu/tegra - -RUN ln -s /usr/include/locale.h /usr/include/xlocale.h - -RUN pip3 install -U numpy grpcio absl-py py-cpuinfo psutil portpicker six mock requests gast h5py astor termcolor protobuf \ - keras-applications keras-preprocessing wrapt google-pasta - -# From https://developer.download.nvidia.com/compute/redist/jp -COPY /tensorflow_gpu-1.15* . - -RUN pip3 install tensorflow_gpu-1.15.0+nv20.1-cp36-cp36m-linux_aarch64.whl && \ - rm -rf tensorflow*.whl && \ - rm -rf /root/.cache diff --git a/archived/docker/tf-jp42.dockerfile b/archived/docker/tf-jp42.dockerfile deleted file mode 100644 index 7d98ba2d..00000000 --- a/archived/docker/tf-jp42.dockerfile +++ /dev/null @@ -1,28 +0,0 @@ -# docker build -f -t centipede2donald/nvidia-jetson:jp42-python27-opencv32-tensorflow113 . 
-FROM centipede2donald/ubuntu-bionic:python27-opencv32 - -ENV TZ=Europe/Amsterdam -RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone - -RUN apt-get update && apt-get install -y --no-install-recommends \ - libhdf5-serial-dev \ - hdf5-tools \ - libhdf5-dev \ - zlib1g-dev zip \ - libjpeg8-dev \ - liblapack-dev \ - libblas-dev \ - gfortran \ - wget \ - && rm -rf /var/lib/apt/lists/* - -# Origin https://developer.download.nvidia.com/compute/redist/jp/v411/tensorflow-gpu -COPY /tensorflow_gpu-1.13.0* / - -ENV HDF5_DIR=/usr/lib/aarch64-linux-gnu/hdf5/serial - -RUN pip install /tensorflow_gpu-1.13.0rc0+nv19.2-cp27-cp27mu-linux_aarch64.whl - -RUN pip install --upgrade numpy - -WORKDIR / diff --git a/archived/docker/tf-jp44.dockerfile b/archived/docker/tf-jp44.dockerfile deleted file mode 100644 index e4d0d7c0..00000000 --- a/archived/docker/tf-jp44.dockerfile +++ /dev/null @@ -1,15 +0,0 @@ -# docker build -f -t centipede2donald/nvidia-jetson:jp44-r32.4.3-tf1.15-py3 . -FROM nvcr.io/nvidia/l4t-tensorflow:r32.4.3-tf1.15-py3 - -ENV TZ=Europe/Amsterdam -RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone - -RUN apt-get update && apt-get install -y --no-install-recommends \ - python3-opencv \ - python3-pandas \ - python3-pip \ - python3-zmq \ - wget \ - && rm -rf /var/lib/apt/lists/* - -WORKDIR / diff --git a/archived/docker/tf114-jp333.dockerfile b/archived/docker/tf114-jp333.dockerfile deleted file mode 100644 index 9c0dbe43..00000000 --- a/archived/docker/tf114-jp333.dockerfile +++ /dev/null @@ -1,74 +0,0 @@ -# docker build -f tf114-jp333.dockerfile -t centipede2donald/nvidia-jetson:jp333-cp27-tf114-1 . -FROM balenalib/jetson-nano-ubuntu:bionic as buildstep - -WORKDIR /app - -# Download with the nvidia sdk manager -COPY /cuda-repo-*.deb . -COPY /libcudnn7_*.deb . 
- -ENV DEBIAN_FRONTEND noninteractive - -RUN dpkg -i cuda-repo-l4t-9-0-local_9.0.252-1_arm64.deb && \ - apt-key add /var/cuda-repo-9-0-local/*.pub && \ - apt-get update && \ - apt-get install -y cuda-toolkit-9-0 ./libcudnn7_7.1.5.14-1+cuda9.0_arm64.deb && \ - rm -rf *.deb && \ - dpkg --remove cuda-repo-l4t-9-0-local && \ - dpkg -P cuda-repo-l4t-9-0-local && \ - echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && \ - ldconfig - -RUN rm -rf /usr/local/cuda-9.0/doc - -FROM balenalib/jetson-nano-ubuntu:bionic as final - -COPY --from=buildstep /usr/local/cuda-9.0 /usr/local/cuda-9.0 -COPY --from=buildstep /usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu -COPY --from=buildstep /usr/local/lib /usr/local/lib - -WORKDIR /app - -COPY /nvidia_drivers.tbz2 . -COPY /config.tbz2 . - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install lbzip2 -y && \ - tar xjf nvidia_drivers.tbz2 -C / && \ - tar xjf config.tbz2 -C / --exclude=etc/hosts --exclude=etc/hostname && \ - echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && ldconfig && \ - apt-get install -y --no-install-recommends \ - libhdf5-serial-dev \ - hdf5-tools \ - libhdf5-dev \ - build-essential \ - python-dev \ - python-opencv \ - python-pandas \ - python-pip \ - python-setuptools \ - python-wheel \ - python-zmq \ - python-h5py \ - zlib1g-dev zip \ - libjpeg8-dev \ - liblapack-dev \ - libblas-dev \ - gfortran \ - wget && \ - apt-get -y clean && \ - rm -rf /var/lib/apt/lists/* && \ - rm -rf *.tbz2 - -RUN pip install "jsoncomment==0.3.3" && \ - pip install "pytest==4.6.11" - -# From https://developer.download.nvidia.com/compute/redist/jp/ -COPY /tensorflow_gpu-1.14.0* . 
- -RUN pip install tensorflow_gpu-1.14.0+nv19.9-cp27-cp27mu-linux_aarch64.whl && \ - rm -rf tensorflow*.whl && \ - rm -rf /root/.cache - -WORKDIR / \ No newline at end of file diff --git a/archived/docker/tf115-cp36-jp44.dockerfile b/archived/docker/tf115-cp36-jp44.dockerfile deleted file mode 100644 index ef179ade..00000000 --- a/archived/docker/tf115-cp36-jp44.dockerfile +++ /dev/null @@ -1,102 +0,0 @@ -# docker build -f tf115-cp36-jp44.dockerfile -t centipede2donald/nvidia-jetson:nano-jp44-tf-1.15.4-nv20.11-cp36-opencv-4.4 . -FROM balenalib/jetson-nano-ubuntu:bionic as buildstep - -WORKDIR /app - -# Download with the nvidia sdk manager -COPY /cuda-repo-l4t-10-2*.deb . -COPY /libcudnn8_*.deb . - -ENV DEBIAN_FRONTEND noninteractive - -RUN dpkg -i cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb && \ - apt-key add /var/cuda-repo-10-2-local-10.2.89/*.pub && \ - apt-get update && \ - apt-get install -y cuda-toolkit-10-2 ./libcudnn8_8.0.0.180-1+cuda10.2_arm64.deb && \ - rm -rf *.deb && \ - dpkg --remove cuda-repo-l4t-10-2-local-10.2.89 && \ - dpkg -P cuda-repo-l4t-10-2-local-10.2.89 && \ - echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && \ - ldconfig - -RUN rm -rf /usr/local/cuda-10.2/doc - -FROM balenalib/jetson-nano-ubuntu:bionic as final - -COPY --from=buildstep /usr/local/cuda-10.2 /usr/local/cuda-10.2 -COPY --from=buildstep /usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu -COPY --from=buildstep /usr/local/lib /usr/local/lib - -WORKDIR /app - -COPY /nvidia_drivers.tbz2 . -COPY /config.tbz2 . 
- -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install lbzip2 -y && \ - tar xjf nvidia_drivers.tbz2 -C / && \ - tar xjf config.tbz2 -C / --exclude=etc/hosts --exclude=etc/hostname && \ - echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && ldconfig && \ - rm -rf *.tbz2 - -RUN apt-get update && apt-get install -y --no-install-recommends \ - make automake gcc g++ \ - build-essential \ - python3-dev \ - python3-pip \ - python3-setuptools \ - libhdf5-dev \ - libhdf5-serial-dev \ - python3-h5py \ - gfortran \ - hdf5-tools \ - libblas-dev \ - liblapack-dev \ - pkg-config \ - python3-zmq \ - libgl1-mesa-glx \ - && \ - python3 -m pip install --no-cache-dir --upgrade pip && \ - python3 -m pip install --no-cache-dir --upgrade setuptools && \ - python3 -m pip install --no-cache-dir --upgrade wheel && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -RUN python3 -m pip install "absl-py>=0.7.0" && \ - python3 -m pip install "astor>=0.6.0" && \ - python3 -m pip install "gast==0.2.2" && \ - python3 -m pip install "google-pasta>=0.1.6" && \ - python3 -m pip install "keras-applications>=1.0.8" && \ - python3 -m pip install "keras-preprocessing>=1.0.5" && \ - python3 -m pip install "protobuf>=3.6.1" && \ - python3 -m pip install "numpy<1.19.0,>=1.16.0" && \ - python3 -m pip install "opt-einsum>=2.3.2" && \ - python3 -m pip install "six>=1.10.0" && \ - python3 -m pip install "tensorboard<1.16.0,>=1.15.0" && \ - python3 -m pip install "tensorflow-estimator==1.15.1" && \ - python3 -m pip install "termcolor>=1.1.0" && \ - python3 -m pip install "wrapt>=1.11.1" && \ - python3 -m pip install "grpcio>=1.8.6" - -RUN ln -s /usr/include/locale.h /usr/include/xlocale.h - -RUN python3 -m pip wheel --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow==1.15.4+nv20.11 - -RUN pip3 install "opencv-python >=4.4.0.42,<4.5.0" && \ - pip3 install "numpy<1.19.0,>=1.16.0" && \ - pip3 install "scipy>=1.4.1,<1.5" 
&& \ - pip3 install "jsoncomment==0.3.3" && \ - pip3 install "Equation==1.2.1" && \ - pip3 install "pytest==4.6.11" - -RUN ls -1 -d *.whl | xargs python3 -m pip install --no-cache-dir - -RUN rm -rf /root/.cache && rm -rf *.whl - -ENV LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64 -ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-10.2/targets/aarch64-linux/lib -ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/aarch64-linux-gnu -ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/aarch64-linux-gnu/tegra - -WORKDIR / \ No newline at end of file diff --git a/archived/docker/tf115-jp422-trt516.dockerfile b/archived/docker/tf115-jp422-trt516.dockerfile deleted file mode 100644 index 478e4ffc..00000000 --- a/archived/docker/tf115-jp422-trt516.dockerfile +++ /dev/null @@ -1,55 +0,0 @@ -# docker build -f tf115-jp422-trt516.dockerfile -t centipede2donald/nvidia-jetson:jp422-cp36-tf115-trt5.1.6-1 . -FROM bouwe/jetson-nano-l4t-cuda-cudnn-nvinfer-tensorrt-opencv:latest - -WORKDIR /app - -ENV DEBIAN_FRONTEND=noninteractive -ARG HDF5_DIR="/usr/lib/aarch64-linux-gnu/hdf5/serial/" -ARG MAKEFLAGS=-j6 - -RUN apt-get update && \ - apt-get install -y --no-install-recommends \ - python3-pip \ - python3-dev \ - gfortran \ - build-essential \ - liblapack-dev \ - libblas-dev \ - libhdf5-serial-dev \ - hdf5-tools \ - libhdf5-dev \ - zlib1g-dev \ - zip \ - libjpeg8-dev \ - && rm -rf /var/lib/apt/lists/* - -RUN pip3 install setuptools Cython wheel && \ - pip3 install numpy --verbose && \ - pip3 install h5py==2.10.0 --verbose && \ - pip3 install future==0.17.1 \ - mock==3.0.5 \ - keras_preprocessing==1.0.5 \ - keras_applications==1.0.8 \ - gast==0.2.2 \ - futures \ - protobuf \ - pybind11 --verbose - -COPY /tensorflow_gpu-1.15* . 
- -# https://developer.download.nvidia.com/compute/redist/jp/v42/tensorflow-gpu/ -RUN pip3 install tensorflow_gpu-1.15.0+nv19.11-cp36-cp36m-linux_aarch64.whl && \ - rm -rf tensorflow*.whl && \ - rm -rf /root/.cache - -RUN apt-get update && \ - apt-get install -y --no-install-recommends \ - python3-zmq \ - python3-pandas \ - && rm -rf /var/lib/apt/lists/* - -RUN pip3 install "jsoncomment==0.3.3" - -WORKDIR / - -ENTRYPOINT [ "/bin/sh", "-c" ] \ No newline at end of file diff --git a/archived/docker/tf115-jp423.dockerfile b/archived/docker/tf115-jp423.dockerfile deleted file mode 100644 index d9ba4902..00000000 --- a/archived/docker/tf115-jp423.dockerfile +++ /dev/null @@ -1,81 +0,0 @@ -# docker build -f tf115-jp423.dockerfile -t centipede2donald/nvidia-jetson:jp423-cp36-tf115-trt5.1.6 . -FROM balenalib/jetson-nano-ubuntu:bionic as buildstep - -WORKDIR /app - -# Download with the nvidia sdk manager -COPY /*.deb . - -ENV DEBIAN_FRONTEND noninteractive - -RUN dpkg -i cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb && \ - apt-key add /var/cuda-repo-10-0-local-10.0.326/*.pub && apt-get update && \ - apt-get install -y cuda-toolkit-10-0 ./libcudnn7_7.5.0.56-1+cuda10.0_arm64.deb && \ - rm -rf *.deb && \ - dpkg --remove cuda-repo-l4t-10-0-local-10.0.326 && \ - dpkg -P cuda-repo-l4t-10-0-local-10.0.326 && \ - echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && \ - ldconfig - -# to be done for tensorrt: -# dpkg -i libnvinfer5_5.1.6-1+cuda10.0_arm64.deb -# dpkg -i libcudnn7-dev_7.5.0.56-1+cuda10.0_arm64.deb -# dpkg -i libnvinfer-dev_5.1.6-1+cuda10.0_arm64.deb -# dpkg -i graphsurgeon-tf_5.1.6-1+cuda10.0_arm64.deb -# dpkg -i uff-converter-tf_5.1.6-1+cuda10.0_arm64.deb -# dpkg -i libnvinfer-samples_5.1.6-1+cuda10.0_all.deb -# dpkg -i tensorrt_5.1.6.1-1+cuda10.0_arm64.deb - -RUN rm -rf /usr/local/cuda-10.0/doc - -FROM balenalib/jetson-nano-ubuntu:bionic as final - -COPY --from=buildstep /usr/local/cuda-10.0 /usr/local/cuda-10.0 -COPY --from=buildstep 
/usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu
-COPY --from=buildstep /usr/local/lib /usr/local/lib
-
-WORKDIR /app
-
-COPY /nvidia_drivers.tbz2 .
-COPY /config.tbz2 .
-
-ENV DEBIAN_FRONTEND noninteractive
-
-RUN apt-get update && apt-get install lbzip2 -y && \
-    tar xjf nvidia_drivers.tbz2 -C / && \
-    tar xjf config.tbz2 -C / --exclude=etc/hosts --exclude=etc/hostname && \
-    echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && ldconfig && \
-    apt-get install -y --no-install-recommends \
-    python3-pip \
-    wget && \
-    apt-get -y clean && \
-    rm -rf /var/lib/apt/lists/* && \
-    rm -rf *.tbz2
-
-# to be done for tensorrt:
-# dpkg -i python3-libnvinfer_5.1.6-1+cuda10.0_arm64.deb
-# dpkg -i python3-libnvinfer-dev_5.1.6-1+cuda10.0_arm64.deb
-
-# From https://developer.download.nvidia.com/compute/redist/jp/v411/tensorflow-gpu
-COPY /tensorflow_gpu-1.15.0* .
-
-# apt-get install -y --no-install-recommends libhdf5-serial-dev hdf5-tools libhdf5-dev python3-h5py
-
-RUN pip3 install tensorflow_gpu-1.15.0+nv19.11-cp36-cp36m-linux_aarch64.whl && \
-    rm -rf tensorflow*.whl
-
-# to be done
-# pip uninstall tensorflow_estimator
-# pip install -Iv tensorflow_estimator==1.13.0
-
-RUN pip install "jsoncomment==0.3.3" && \
-    pip3 install "numpy==1.16.6" && \
-    pip install "pytest==4.6.11" && \
-    rm -rf /root/.cache
-
-ENV LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64
-ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-10.0/targets/aarch64-linux/lib
-ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/aarch64-linux-gnu
-ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/aarch64-linux-gnu/tegra
-
-WORKDIR /
\ No newline at end of file
diff --git a/archived/docker/tf115-jp44.dockerfile b/archived/docker/tf115-jp44.dockerfile
deleted file mode 100644
index dcc9c48b..00000000
--- a/archived/docker/tf115-jp44.dockerfile
+++ /dev/null
@@ -1,45 +0,0 @@
-# docker build -f tf115-jp44.dockerfile -t centipede2donald/nvidia-jetson:jp44-cp36-tf115-2 .
-FROM nvcr.io/nvidia/l4t-base:r32.4.3
-
-ENV DEBIAN_FRONTEND=noninteractive
-ARG HDF5_DIR="/usr/lib/aarch64-linux-gnu/hdf5/serial/"
-ARG MAKEFLAGS=-j6
-
-RUN apt-get update && \
-    apt-get install -y --no-install-recommends \
-    python3-pip \
-    python3-dev \
-    gfortran \
-    build-essential \
-    liblapack-dev \
-    libblas-dev \
-    libhdf5-serial-dev \
-    hdf5-tools \
-    libhdf5-dev \
-    zlib1g-dev \
-    zip \
-    libjpeg8-dev \
-    && rm -rf /var/lib/apt/lists/*
-
-RUN pip3 install setuptools Cython wheel && \
-    pip3 install numpy --verbose && \
-    pip3 install h5py==2.10.0 --verbose && \
-    pip3 install future==0.17.1 \
-    mock==3.0.5 \
-    keras_preprocessing==1.0.5 \
-    keras_applications==1.0.8 \
-    gast==0.2.2 \
-    futures \
-    protobuf \
-    pybind11 --verbose
-
-# https://developer.download.nvidia.com/compute/redist/jp/v44/tensorflow/
-COPY /tensorflow-1.15* .
-
-RUN pip3 install tensorflow-1.15.3+nv20.8-cp36-cp36m-linux_aarch64.whl && \
-    rm -rf tensorflow*.whl
-
-ENV PATH="/usr/local/cuda/bin:${PATH}"
-ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
-
-WORKDIR /
diff --git a/archived/docker/tmp_tf115-cp36-jp44.dockerfile b/archived/docker/tmp_tf115-cp36-jp44.dockerfile
deleted file mode 100644
index f93ada41..00000000
--- a/archived/docker/tmp_tf115-cp36-jp44.dockerfile
+++ /dev/null
@@ -1,28 +0,0 @@
-# docker build -f tf115-cp36-jp44.dockerfile -t centipede2donald/l4t:32.4.3-nano-jetpack-4.4-tf-1.15.4-nv20.11-cp36-tensorrt-opencv-4.4 .
-FROM centipede2donald/l4t:32.4.3-nano-jetpack-4.4-tf-1.15.4-nv20.11-cp36-tensorrt - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install -y --no-install-recommends \ - make automake gcc g++ \ - build-essential \ - gfortran \ - python3-pip \ - python3-setuptools \ - python3-zmq \ - libblas-dev \ - liblapack-dev \ - pkg-config \ - python3-dev \ - libgl1-mesa-glx \ - && apt-get -y clean && rm -rf /var/lib/apt/lists/* - - -RUN pip3 install "opencv-python >=4.4.0.42,<4.5.0" && \ - pip3 install "numpy<1.19.0,>=1.16.0" && \ - pip3 install "scipy>=1.4.1,<1.5" && \ - pip3 install "jsoncomment==0.3.3" && \ - pip3 install "Equation==1.2.1" && \ - pip3 install "pytest==4.6.11" - -WORKDIR / \ No newline at end of file diff --git a/archived/docker/ubuntu-opencv.dockerfile b/archived/docker/ubuntu-opencv.dockerfile deleted file mode 100644 index 560e6937..00000000 --- a/archived/docker/ubuntu-opencv.dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -# docker build -f -t centipede2donald/ubuntu-bionic:python27-opencv32 . -# docker buildx build --platform linux/arm64,linux/amd64 --push -f -t centipede2donald/ubuntu-bionic:python27-opencv32 . -FROM ubuntu:bionic - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install -y --no-install-recommends \ - build-essential \ - python-dev \ - python-opencv \ - python-pandas \ - python-pip \ - python-setuptools \ - python-wheel \ - python-zmq \ - && rm -rf /var/lib/apt/lists/* diff --git a/archived/docker/z5_jp441-nano-cp36-trt.dockerfile b/archived/docker/z5_jp441-nano-cp36-trt.dockerfile deleted file mode 100644 index 8e8762bd..00000000 --- a/archived/docker/z5_jp441-nano-cp36-trt.dockerfile +++ /dev/null @@ -1,139 +0,0 @@ -# https://github.com/balena-io-playground/jetson-nano-sample-new/blob/master/CUDA/Dockerfile -# docker build -f jp441-nano-cp36-trt.dockerfile -t centipede2donald/nvidia-jetson:jp441-nano-cp36-trt . 
-FROM balenalib/jetson-nano-ubuntu:bionic - -WORKDIR / - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && \ - apt-get install -y cuda-toolkit-10-2 cuda-samples-10-2 libcudnn8 tensorrt python3-libnvinfer lbzip2 xorg wget tar python3 libegl1 && \ - rm -rf /var/lib/apt/lists/* && \ - rm -rf /usr/local/cuda-10.2/doc - -# Download and install BSP binaries for L4T 32.4.4 (JetPack 4.4.1) -RUN apt-get update && apt-get install -y wget tar lbzip2 python3 libegl1 && \ - wget https://developer.nvidia.com/embedded/L4T/r32_Release_v4.4/r32_Release_v4.4-GMC3/T210/Tegra210_Linux_R32.4.4_aarch64.tbz2 && \ - tar xf Tegra210_Linux_R32.4.4_aarch64.tbz2 && \ - cd Linux_for_Tegra && \ - sed -i 's/config.tbz2\"/config.tbz2\" --exclude=etc\/hosts --exclude=etc\/hostname/g' apply_binaries.sh && \ - sed -i 's/install --owner=root --group=root \"${QEMU_BIN}\" \"${L4T_ROOTFS_DIR}\/usr\/bin\/\"/#install --owner=root --group=root \"${QEMU_BIN}\" \"${L4T_ROOTFS_DIR}\/usr\/bin\/\"/g' nv_tegra/nv-apply-debs.sh && \ - sed -i 's/LC_ALL=C chroot . mount -t proc none \/proc/ /g' nv_tegra/nv-apply-debs.sh && \ - sed -i 's/umount ${L4T_ROOTFS_DIR}\/proc/ /g' nv_tegra/nv-apply-debs.sh && \ - sed -i 's/chroot . \// /g' nv_tegra/nv-apply-debs.sh && \ - ./apply_binaries.sh -r / --target-overlay && cd .. 
&& \
-    rm -rf Tegra210_Linux_R32.4.4_aarch64.tbz2 && \
-    rm -rf Linux_for_Tegra && \
-    echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && ldconfig
-
-ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
-ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/targets/aarch64-linux/lib
-ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/aarch64-linux-gnu
-ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/aarch64-linux-gnu/tegra
-ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/extras/CUPTI/lib64
-
-ENV PATH="/usr/local/cuda/bin:${PATH}"
-
-RUN apt-get install -y locales python3 python3-pip git usbutils nano sudo
-
-# change the locale from POSIX to UTF-8
-RUN locale-gen en_US en_US.UTF-8 && update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
-ENV LANG=en_US.UTF-8
-
-# ----------------------------------------------------------
-# OpenCV 4.1.1
-# ----------------------------------------------------------
-RUN apt-key adv --fetch-key https://repo.download.nvidia.com/jetson/jetson-ota-public.asc
-
-RUN echo "deb https://repo.download.nvidia.com/jetson/common r32.4 main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list && \
-    apt-get update && \
-    apt-get install -y --no-install-recommends \
-    libopencv-python && \
-    apt-get clean && \
-    rm -rf /var/lib/apt/lists/*
-
-RUN pip3 install setuptools Cython wheel
-RUN pip3 install numpy==1.19.5
-RUN pip3 install "MarkupSafe >=0.9.2, <2"
-RUN pip3 install pycuda
-
-RUN apt-get update && apt-get install -y --no-install-recommends \
-    make automake gcc g++ cmake \
-    libprotobuf-dev protobuf-compiler \
-    liblapack-dev \
-    libblas-dev \
-    gfortran \
-    python3-zmq \
-    && apt-get -y clean && rm -rf /var/lib/apt/lists/*
-
-RUN pip3 install "onnx==1.9.0"
-
-RUN wget https://nvidia.box.com/shared/static/ukszbm1iklzymrt54mgxbzjfzunq7i9t.whl -O onnxruntime_gpu-1.7.0-cp36-cp36m-linux_aarch64.whl && \
-    pip3 install onnxruntime_gpu-1.7.0-cp36-cp36m-linux_aarch64.whl && \
-    rm -rf
onnxruntime_gpu-1.7.0*.whl - -RUN ln -s /usr/bin/python3 /usr/bin/python - -RUN apt-get update && \ - ldconfig && \ - apt-get install -y --no-install-recommends \ - python3-pip \ - python3-dev \ - libopenblas-dev \ - libopenmpi2 \ - openmpi-bin \ - openmpi-common \ - gfortran && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -# ---------------------------------------------------------- -# PyTorch (for JetPack 4.4 DP) -# ---------------------------------------------------------- -# Wheels: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-8-0-now-available/72048 -# -ARG PYTORCH_URL=https://nvidia.box.com/shared/static/cs3xn3td6sfgtene6jdvsxlr366m2dhq.whl -ARG PYTORCH_WHL=torch-1.7.0-cp36-cp36m-linux_aarch64.whl - -RUN wget --quiet --show-progress --progress=bar:force:noscroll --no-check-certificate ${PYTORCH_URL} -O ${PYTORCH_WHL} && \ - pip3 install ${PYTORCH_WHL} --verbose && \ - rm ${PYTORCH_WHL} - -# ---------------------------------------------------------- -# torchvision -# ---------------------------------------------------------- -RUN apt-get update && \ - apt-get install -y --no-install-recommends \ - git \ - build-essential \ - libjpeg-dev \ - zlib1g-dev && \ - apt-get -y clean && \ - rm -rf /var/lib/apt/lists/* - -ARG TORCHVISION_VERSION=v0.7.0 -RUN git clone -b ${TORCHVISION_VERSION} https://github.com/pytorch/vision torchvision && \ - cd torchvision && \ - python3 setup.py install && \ - cd ../ && \ - rm -rf torchvision - -RUN wget https://github.com/Kitware/CMake/releases/download/v3.20.2/cmake-3.20.2-linux-aarch64.tar.gz && \ - tar -zxvf cmake-3.20.2-linux-aarch64.tar.gz && \ - rm cmake-3.20.2-linux-aarch64.tar.gz - -ENV PATH="/cmake-3.20.2-linux-aarch64/bin:${PATH}" - -RUN git clone --recursive --branch 7.1 https://github.com/onnx/onnx-tensorrt.git && \ - cd onnx-tensorrt && \ - mkdir build && cd build && \ - cmake .. 
-DTENSORRT_ROOT=/usr/src/tensorrt && make -j8 && \ - make install && ldconfig - -RUN python3 -m pip install "scipy==1.5.2" && \ - python3 -m pip install "jsoncomment==0.3.3" && \ - python3 -m pip install "Equation==1.2.1" && \ - python3 -m pip install "pytest==6.1.2" && \ - rm -rf /root/.cache - -ENV PYTHONPATH "${PYTHONPATH}:/onnx-tensorrt" diff --git a/archived/exr/Dockerfile b/archived/exr/Dockerfile deleted file mode 100644 index 31859739..00000000 --- a/archived/exr/Dockerfile +++ /dev/null @@ -1,39 +0,0 @@ -FROM centipede2donald/ros-melodic:python27-opencv32-gstreamer10 - -# -# ueye -# on the host disable the usb daemon: sudo update-rc.d ueyeusbdrc disable -# in the container the ueyeusbdrc must be started by a script since containers use neither initd nor systemd. -# use privileged mode, or capabilities -# container mount volume /dev/ueye:/dev/ueye -# -# pcan -# use privileged mode, or capabilities -# container mount pcan devices as volumes and provide access to the compiled libpcan* via LD_LIBRARY_PATH -# - -RUN apt-get update && apt-get install -y --no-install-recommends \ - usbutils \ - udev \ - libpopt-dev \ - net-tools \ - iproute2 \ - kmod \ - && rm -rf /var/lib/apt/lists/* - -COPY ./common common/ -COPY ./docker/deps deps/ -COPY ./vehicles/exr app/ - -WORKDIR /app -RUN /bin/cp exr_entry.sh /exr_entry.sh \ - && /bin/bash -c "chmod +x /exr_entry.sh" - -RUN pip install --no-cache-dir -r requirements.txt - -RUN /deps/ueye_4.92.0.0_arm64.run --auto \ - && ueyesetup -i usb - -RUN useradd --uid 1000 --gid 1000 --system --create-home --shell /bin/bash app - -ENTRYPOINT ["/exr_entry.sh"] diff --git a/archived/exr/app.py b/archived/exr/app.py deleted file mode 100644 index 6eb17d95..00000000 --- a/archived/exr/app.py +++ /dev/null @@ -1,297 +0,0 @@ -import argparse -import collections -import glob -import logging -import math -import multiprocessing -import os -import signal -import struct -import threading -import time -from ConfigParser import SafeConfigParser -from functools
import partial - -import can -import cv2 -import numpy as np -from can import CanError -from pyueye import ueye - -from byodr.utils import timestamp -from byodr.utils.ipc import ReceiverThread, JSONPublisher, ImagePublisher -from camera import Camera, FrameThread - -logger = logging.getLogger(__name__) -log_format = "%(levelname)s: %(filename)s %(funcName)s %(message)s" - -quit_event = multiprocessing.Event() - -# Shape to pull of each camera. -CAMERA_SHAPE = (240, 320, 3) - -PEAK_CAN_BIT_RATE = 100000 -CAN_BUS_HZ = 20 - -signal.signal(signal.SIGINT, lambda sig, frame: _interrupt()) -signal.signal(signal.SIGTERM, lambda sig, frame: _interrupt()) - - -def _interrupt(): - logger.info("Received interrupt, quitting.") - quit_event.set() - - -def init_drives(bus): - """ - Settings from 2018 RUVU Robotics Rein Appeldoorn / Rokus Ottervanger - - can0 TX - - 100 [8] 00 00 00 00 00 00 00 3F '.......?' - can0 TX - - 130 [8] 00 00 00 00 06 40 00 00 '.....@..' - can0 TX - - 120 [8] 64 00 00 00 06 40 00 00 'd....@..' - can0 TX - - 101 [8] 00 00 00 00 00 00 00 3F '.......?' - can0 TX - - 131 [8] 00 00 00 00 06 40 00 00 '.....@..' - can0 TX - - 121 [8] 64 00 00 00 06 40 00 00 'd....@..' - """ - # id=0x130,0x131 Bandwidth control last 4 bytes IQ20 [0; 100] - # Left drive. - bus.send(can.Message(check=True, arbitration_id=0x100, is_extended_id=False, data=([0, 0, 0, 0, 0, 0, 0, 0x3F]))) - bus.send(can.Message(check=True, arbitration_id=0x130, is_extended_id=False, data=([0, 0, 0, 0, 6, 0x40, 0, 0]))) - # Right drive. - bus.send(can.Message(check=True, arbitration_id=0x101, is_extended_id=False, data=([0, 0, 0, 0, 0, 0, 0, 0x3F]))) - bus.send(can.Message(check=True, arbitration_id=0x131, is_extended_id=False, data=([0, 0, 0, 0, 6, 0x40, 0, 0]))) - - -def param_drives(bus): - # id=0x120,0x121 Max acceleration IQ24 [0; 120] and jerk IQ20 [0; 750]. 
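The CAN messages above carry motor parameters as big-endian 32-bit fixed-point values: IQ24 uses 24 fractional bits and IQ20 uses 20. As an illustrative Python 3 sketch (the original comments use the Python 2 `.encode('hex')` idiom), the packing reduces to one scale-and-pack step:

```python
import struct

def to_iq(value, fractional_bits):
    # Scale into fixed point and pack as a big-endian signed 32-bit integer.
    return struct.pack(">i", int(value * (1 << fractional_bits)))

# 100 in IQ24 yields the first four data bytes of messages 0x120/0x121.
assert to_iq(100, 24) == bytes([0x64, 0x00, 0x00, 0x00])
# 100 in IQ20 yields the last four data bytes of the same messages.
assert to_iq(100, 20) == bytes([0x06, 0x40, 0x00, 0x00])
```

The speed-reference path (`drive_bytes` below) uses the same IQ24 packing but first doubles the clamped set-point, mapping the normalized range [-1, 1] onto the drive's half-power range [-2, 2].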
- # IQ24: struct.pack('>i', int(100 * math.pow(2, 24))).encode('hex') - # IQ20: struct.pack('>i', int(300 * math.pow(2, 20))).encode('hex') - bus.send(can.Message(check=True, arbitration_id=0x120, is_extended_id=False, data=([0x64, 0, 0, 0, 0x06, 0x40, 0, 0]))) - bus.send(can.Message(check=True, arbitration_id=0x121, is_extended_id=False, data=([0x64, 0, 0, 0, 0x06, 0x40, 0, 0]))) - - -def tanh(x, scale): - return (1 - math.exp(-scale * x)) / (1 + math.exp(-scale * x)) - - -def drive_values(steering=0.0, throttle=0.0, scale=1.0, is_teleop=True): - # Disable tank drive or turn on spot unless under direct control. - _minimum = -1.0 if is_teleop else 0.0 - left = max(_minimum, min(1.0, throttle + tanh(steering, scale=scale))) - right = max(_minimum, min(1.0, throttle - tanh(steering, scale=scale))) - return left, right - - -def drive_bytes(x): - """ - IQ24 use half-power = [-2; 2]. - """ - x = 2 * max(-1, min(1, x)) - return struct.pack(">i", int(x * math.pow(2, 24))) - - -def m_speed_ref(steering, throttle, scale, is_teleop): - left, right = drive_values(steering, throttle, scale, is_teleop) - _data = drive_bytes(left) + drive_bytes(-right) - m = can.Message(check=True, arbitration_id=0x111, is_extended_id=False, data=_data) - return m - - -class NoneBus(object): - def __init__(self): - pass - - @staticmethod - def send(m): - # logger.debug(m) - pass - - @staticmethod - def shutdown(): - pass - - -class CanBusThread(threading.Thread): - def __init__(self, bus, event, scale=1.0, frequency=CAN_BUS_HZ): - super(CanBusThread, self).__init__() - self._bus_name = bus - self._ms = 1.0 / frequency - self._quit_event = event - self._scale = scale - self._queue = collections.deque(maxlen=1) - self._bus = NoneBus() - self.reset() - - def _create_bus(self): - name = self._bus_name - if name in (None, "None", "none"): - self._bus = NoneBus() - elif name.lower() == "pcan": - self._bus = can.ThreadSafeBus(bustype="pcan", bitrate=PEAK_CAN_BIT_RATE) - else: - self._bus = 
can.ThreadSafeBus(bustype="socketcan", channel=name, bitrate=PEAK_CAN_BIT_RATE) - - def _reset_once(self): - self._bus.shutdown() - self._create_bus() - init_drives(self._bus) - param_drives(self._bus) - - def reset(self, tries=0): - self.set_command() - while tries < 4: - try: - self._reset_once() - logger.info("Reset successful.") - break - except CanError as e: - logger.warn("Reset failed with '{}'.".format(e)) - tries += 1 - time.sleep(1) - - def set_command(self, steering=0.0, throttle=0.0, is_teleop=True): - self._queue.appendleft((steering, throttle, is_teleop)) - - def run(self): - while not self._quit_event.is_set(): - try: - steering, throttle, is_teleop = self._queue[0] - self._bus.send(m_speed_ref(steering=steering, throttle=throttle, scale=self._scale, is_teleop=is_teleop)) - time.sleep(self._ms) - except CanError as be: - logger.error(be) - self.reset() - - def quit(self): - self._quit_event.set() - self._queue.clear() - try: - self._bus.shutdown() - except CanError: - pass - - -class IDSImagingThread(FrameThread): - def __init__(self, device_id=0, views=None, copy=True): - self._cam = Camera(device_id=device_id) - self._cam.init() - self._cam.set_parameters_from_memory() - self._cam.set_colormode(ueye.IS_CM_BGR8_PACKED) - self._cam.alloc() - self._cam.capture_video() - FrameThread.__init__(self, self._cam, views=views, copy=copy) - - def quit(self): - IDSImagingThread.stop(self) - IDSImagingThread.join(self) - self._cam.exit() - - -class TwistHandler(object): - def __init__(self, image_shape=CAMERA_SHAPE, bus_name=None, event=quit_event, steering_scale=1.0, fn_callback=(lambda x: x)): - super(TwistHandler, self).__init__() - self._image_shape = image_shape - self._bus_name = bus_name - self._fn_callback = fn_callback - self._gate = CanBusThread(bus=bus_name, event=event, scale=steering_scale) - self._gate.start() - self._queue1 = collections.deque(maxlen=2) - self._queue2 = collections.deque(maxlen=2) - self._camera1 = IDSImagingThread(device_id=1, 
views=partial(self._handle, _queue=self._queue1, do_call=True)) - self._camera2 = IDSImagingThread(device_id=2, views=partial(self._handle, _queue=self._queue2, do_call=False)) - self._camera1.start() - self._camera2.start() - - def _camera(self): - h, w, c = self._image_shape - img = np.zeros((2 * h, w, c), dtype=np.uint8) - try: - img[:h, :, :] = self._queue1[0] - img[h:, :, :] = self._queue2[0] - return img - except IndexError: - return img - - def _handle(self, image_data, _queue, do_call=True): - h, w, _ = self._image_shape - frame = cv2.resize(image_data.as_1d_image(), (w, h)) - _queue.appendleft(frame) - if do_call: - self._fn_callback(self._camera()) - - def quit(self): - self._gate.quit() - self._camera1.quit() - self._camera2.quit() - - def _drive(self, steering, throttle, driver=None): - try: - is_teleop = driver == "driver_mode.teleop.direct" - if not is_teleop and abs(throttle) < 1e-2: - steering, throttle = 0.0, 0.0 - self._gate.set_command(steering=steering, throttle=throttle, is_teleop=is_teleop) - except Exception as e: - logger.error("{}".format(e)) - - @staticmethod - def state(): - x, y = 0, 0 - return dict(x_coordinate=x, y_coordinate=y, heading=0, velocity=0, time=timestamp()) - - def noop(self): - self._drive(steering=0, throttle=0) - - def drive(self, cmd): - if cmd is not None: - self._drive(steering=cmd.get("steering"), throttle=cmd.get("throttle"), driver=cmd.get("driver")) - - -def main(): - parser = argparse.ArgumentParser(description="Exr main.") - parser.add_argument("--config", type=str, default="/config", help="Config directory path.") - args = parser.parse_args() - - parser = SafeConfigParser() - [parser.read(_f) for _f in glob.glob(os.path.join(args.config, "*.ini"))] - cfg = dict(parser.items("vehicle")) - cfg.update(dict(parser.items("platform"))) - - _process_frequency = int(cfg.get("clock.hz")) - _patience_micro = float(cfg.get("patience.ms", 200)) * 1000 - logger.info("Processing at {} Hz and a patience of {} 
ms.".format(_process_frequency, _patience_micro / 1000)) - - state_publisher = JSONPublisher(url="ipc:///byodr/vehicle.sock", topic="aav/vehicle/state") - image_publisher = ImagePublisher(url="ipc:///byodr/camera.sock", topic="aav/camera/0") - - _interface = cfg.get("can.interface") - _steering_scale = float(cfg.get("drive.motor.steering.scale")) - vehicle = TwistHandler(bus_name=_interface, steering_scale=_steering_scale, fn_callback=(lambda im: image_publisher.publish(im))) - threads = [] - pilot = ReceiverThread(url="ipc:///byodr/pilot.sock", topic=b"aav/pilot/output", event=quit_event) - threads.append(pilot) - [t.start() for t in threads] - - _period = 1.0 / _process_frequency - while not quit_event.is_set(): - command = pilot.get_latest() - _command_time = 0 if command is None else command.get("time") - _command_age = timestamp() - _command_time - _on_time = _command_age < _patience_micro - if _on_time: - vehicle.drive(command) - else: - vehicle.noop() - state_publisher.publish(vehicle.state()) - time.sleep(_period) - - logger.info("Waiting on vehicle to quit.") - vehicle.quit() - - logger.info("Waiting on threads to stop.") - [t.join() for t in threads] - - -if __name__ == "__main__": - logging.basicConfig(format=log_format) - logging.getLogger().setLevel(logging.INFO) - main() diff --git a/archived/exr/camera.py b/archived/exr/camera.py deleted file mode 100644 index 48dd9cef..00000000 --- a/archived/exr/camera.py +++ /dev/null @@ -1,227 +0,0 @@ -import logging - -from pyueye import ueye -from threading import Thread - -logger = logging.getLogger(__name__) - - -def get_bits_per_pixel(color_mode): - """ - returns the number of bits per pixel for the given color mode - raises an exception if the color mode is not in the dict - """ - - return { - ueye.IS_CM_SENSOR_RAW8: 8, - ueye.IS_CM_SENSOR_RAW10: 16, - ueye.IS_CM_SENSOR_RAW12: 16, - ueye.IS_CM_SENSOR_RAW16: 16, - ueye.IS_CM_MONO8: 8, - ueye.IS_CM_RGB8_PACKED: 24, - ueye.IS_CM_BGR8_PACKED: 24, -
ueye.IS_CM_RGBA8_PACKED: 32, - ueye.IS_CM_BGRA8_PACKED: 32, - ueye.IS_CM_BGR10_PACKED: 32, - ueye.IS_CM_RGB10_PACKED: 32, - ueye.IS_CM_BGRA12_UNPACKED: 64, - ueye.IS_CM_BGR12_UNPACKED: 48, - ueye.IS_CM_BGRY8_PACKED: 32, - ueye.IS_CM_BGR565_PACKED: 16, - ueye.IS_CM_BGR5_PACKED: 16, - ueye.IS_CM_UYVY_PACKED: 16, - ueye.IS_CM_UYVY_MONO_PACKED: 16, - ueye.IS_CM_UYVY_BAYER_PACKED: 16, - ueye.IS_CM_CBYCRY_PACKED: 16, - }[color_mode] - - -class uEyeException(Exception): - def __init__(self, error_code): - self.error_code = error_code - - def __str__(self): - return "Err: " + str(self.error_code) - - -def check(ret): - if ret != ueye.IS_SUCCESS: - raise uEyeException(ret) - - -class ImageBuffer: - def __init__(self): - self.mem_ptr = ueye.c_mem_p() - self.mem_id = ueye.int() - - -class MemoryInfo: - def __init__(self, h_cam, img_buff): - self.x = ueye.int() - self.y = ueye.int() - self.bits = ueye.int() - self.pitch = ueye.int() - self.img_buff = img_buff - - rect_aoi = ueye.IS_RECT() - check(ueye.is_AOI(h_cam, ueye.IS_AOI_IMAGE_GET_AOI, rect_aoi, ueye.sizeof(rect_aoi))) - self.width = rect_aoi.s32Width.value - self.height = rect_aoi.s32Height.value - - check(ueye.is_InquireImageMem(h_cam, self.img_buff.mem_ptr, self.img_buff.mem_id, self.x, self.y, self.bits, self.pitch)) - - -class ImageData: - def __init__(self, h_cam, img_buff, copy_=True): - self.h_cam = h_cam - self.img_buff = img_buff - self.mem_info = MemoryInfo(h_cam, img_buff) - self.color_mode = ueye.is_SetColorMode(h_cam, ueye.IS_GET_COLOR_MODE) - self.bits_per_pixel = get_bits_per_pixel(self.color_mode) - self.array = ueye.get_data(self.img_buff.mem_ptr, self.mem_info.width, self.mem_info.height, self.mem_info.bits, self.mem_info.pitch, copy_) - - def as_1d_image(self): - channels = int((7 + self.bits_per_pixel) / 8) - import numpy - - if channels > 1: - return numpy.reshape(self.array, (self.mem_info.height, self.mem_info.width, channels)) - else: - return numpy.reshape(self.array, (self.mem_info.height, 
self.mem_info.width)) - - def unlock(self): - check(ueye.is_UnlockSeqBuf(self.h_cam, self.img_buff.mem_id, self.img_buff.mem_ptr)) - - -class Rect: - def __init__(self, x=0, y=0, width=0, height=0): - self.x = x - self.y = y - self.width = width - self.height = height - - -class FrameThread(Thread): - def __init__(self, cam, views=None, copy=True): - super(FrameThread, self).__init__() - self.timeout = 1000 - self.cam = cam - self.running = True - self.views = views - self.copy = copy - - def run(self): - while self.running: - img_buffer = ImageBuffer() - ret = ueye.is_WaitForNextImage(self.cam.handle(), self.timeout, img_buffer.mem_ptr, img_buffer.mem_id) - if ret == ueye.IS_SUCCESS: - image_data = ImageData(self.cam.handle(), img_buffer, copy_=self.copy) - self.notify(image_data) - image_data.unlock() - # else: - # logger.warn(ret) - - def notify(self, image_data): - if self.views: - if type(self.views) is not list: - self.views = [self.views] - for view in self.views: - view(image_data) - - def stop(self): - self.cam.stop_video() - self.running = False - - -class Camera: - def __init__(self, device_id=0): - self.h_cam = ueye.HIDS(device_id) - self.img_buffers = [] - - def __enter__(self): - self.init() - return self - - def __exit__(self, _type, value, traceback): - self.exit() - - def handle(self): - return self.h_cam - - def alloc(self, buffer_count=3): - rect = self.get_aoi() - bpp = get_bits_per_pixel(self.get_colormode()) - - for buff in self.img_buffers: - check(ueye.is_FreeImageMem(self.h_cam, buff.mem_ptr, buff.mem_id)) - - for i in range(buffer_count): - buff = ImageBuffer() - ueye.is_AllocImageMem(self.h_cam, rect.width, rect.height, bpp, buff.mem_ptr, buff.mem_id) - - check(ueye.is_AddToSequence(self.h_cam, buff.mem_ptr, buff.mem_id)) - - self.img_buffers.append(buff) - - ueye.is_InitImageQueue(self.h_cam, 0) - - def init(self): - ret = ueye.is_InitCamera(self.h_cam, None) - if ret != ueye.IS_SUCCESS: - self.h_cam = None - raise uEyeException(ret) - - 
return ret - - def set_parameters_from_memory(self): - ret = ueye.is_ParameterSet(self.h_cam, ueye.IS_PARAMETERSET_CMD_LOAD_EEPROM, None, 0) - return ret - - def exit(self): - ret = None - if self.h_cam is not None: - ret = ueye.is_ExitCamera(self.h_cam) - if ret == ueye.IS_SUCCESS: - self.h_cam = None - - def get_aoi(self): - rect_aoi = ueye.IS_RECT() - ueye.is_AOI(self.h_cam, ueye.IS_AOI_IMAGE_GET_AOI, rect_aoi, ueye.sizeof(rect_aoi)) - - return Rect(rect_aoi.s32X.value, rect_aoi.s32Y.value, rect_aoi.s32Width.value, rect_aoi.s32Height.value) - - def set_aoi(self, x, y, width, height): - rect_aoi = ueye.IS_RECT() - rect_aoi.s32X = ueye.int(x) - rect_aoi.s32Y = ueye.int(y) - rect_aoi.s32Width = ueye.int(width) - rect_aoi.s32Height = ueye.int(height) - - return ueye.is_AOI(self.h_cam, ueye.IS_AOI_IMAGE_SET_AOI, rect_aoi, ueye.sizeof(rect_aoi)) - - def capture_video(self, wait=False): - wait_param = ueye.IS_WAIT if wait else ueye.IS_DONT_WAIT - return ueye.is_CaptureVideo(self.h_cam, wait_param) - - def stop_video(self): - return ueye.is_StopLiveVideo(self.h_cam, ueye.IS_FORCE_VIDEO_STOP) - - def freeze_video(self, wait=False): - wait_param = ueye.IS_WAIT if wait else ueye.IS_DONT_WAIT - return ueye.is_FreezeVideo(self.h_cam, wait_param) - - def set_colormode(self, colormode): - check(ueye.is_SetColorMode(self.h_cam, colormode)) - - def get_colormode(self): - ret = ueye.is_SetColorMode(self.h_cam, ueye.IS_GET_COLOR_MODE) - return ret - - def get_format_list(self): - count = ueye.UINT() - check(ueye.is_ImageFormat(self.h_cam, ueye.IMGFRMT_CMD_GET_NUM_ENTRIES, count, ueye.sizeof(count))) - format_list = ueye.IMAGE_FORMAT_LIST(ueye.IMAGE_FORMAT_INFO * count.value) - format_list.nSizeOfListEntry = ueye.sizeof(ueye.IMAGE_FORMAT_INFO) - format_list.nNumListElements = count.value - check(ueye.is_ImageFormat(self.h_cam, ueye.IMGFRMT_CMD_GET_LIST, format_list, ueye.sizeof(format_list))) - return format_list diff --git a/archived/exr/docker-exr.yml 
b/archived/exr/docker-exr.yml deleted file mode 100644 index 17562dbe..00000000 --- a/archived/exr/docker-exr.yml +++ /dev/null @@ -1,35 +0,0 @@ -version: '2' -services: - inference: - build: - context: ./ - dockerfile: inference/Dockerfile.jp - volumes: - - './inference:/app' - devices: - - '/dev/nvhost-ctrl' - - '/dev/nvhost-ctrl-gpu' - - '/dev/nvhost-prof-gpu' - - '/dev/nvmap' - - '/dev/nvhost-gpu' - - '/dev/nvhost-as-gpu' - vehicle: - privileged: true - user: 'root' - build: - context: ./ - dockerfile: vehicles/exr/Dockerfile - image: centipede2donald/byodr-ce:vehicle-exr - volumes: - - ./common:/common - - ./docker/deps:/deps - - ./vehicles/exr:/app - - /dev/ueye:/dev/ueye - - /dev/pcan32:/dev/pcan32 - - /dev/pcan-usb:/dev/pcan-usb - - /dev/pcanusb32:/dev/pcanusb32 - - ${DC_CONFIG_DIR}:/config - volumes_from: - - teleop:rw - - diff --git a/archived/exr/exr_entry.sh b/archived/exr/exr_entry.sh deleted file mode 100644 index 94513d35..00000000 --- a/archived/exr/exr_entry.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/env bash -set -e - -/bin/bash -c "/etc/init.d/ueyeusbdrc start" - -# setup ros environment -source "/opt/ros/$ROS_DISTRO/setup.bash" - -#exec "$@" -exec su - app -c "export PYTHONPATH=.:/common \\ - && export LD_LIBRARY_PATH=/deps/peak-linux-driver-8.9.3/libpcanbasic/pcanbasic:/deps/peak-linux-driver-8.9.3/lib/lib \\ - && cd /app && $*" diff --git a/archived/exr/requirements.txt b/archived/exr/requirements.txt deleted file mode 100644 index 5f02e541..00000000 --- a/archived/exr/requirements.txt +++ /dev/null @@ -1,3 +0,0 @@ -jsoncomment >=0.3, <1.0 -python-can >=3.3.2, <4.0 -pyueye >=4.90.0, <5.0 diff --git a/archived/odometry.py b/archived/odometry.py deleted file mode 100644 index dff62829..00000000 --- a/archived/odometry.py +++ /dev/null @@ -1,67 +0,0 @@ -import logging -import multiprocessing -import signal -import time - -from gpiozero import DigitalInputDevice - -from byodr.utils import timestamp -from byodr.utils.ipc import JSONPublisher - 
-logger = logging.getLogger(__name__) -log_format = "%(levelname)s: %(filename)s %(funcName)s %(message)s" - -quit_event = multiprocessing.Event() - -signal.signal(signal.SIGINT, lambda sig, frame: _interrupt()) -signal.signal(signal.SIGTERM, lambda sig, frame: _interrupt()) - - -def _interrupt(): - logger.info("Received interrupt, quitting.") - quit_event.set() - - -class HallRps(object): - def __init__(self, pin=22, moment=0.10): - self._moment = moment - self._detect_time, self._up, self._rps = 0, 0, 0 - self._sensor = DigitalInputDevice(pin=pin, pull_up=True) - self._sensor.when_activated = self._detect - - def tick(self): - # Drop to zero when stopped. - if timestamp() - self._detect_time > 1e5: - self._rps = (1.0 - self._moment) * self._rps - self._rps = self._rps if self._rps > 1e-4 else 0 - - def rps(self): - return self._rps - - def debug(self): - return "{} {} {}".format(self._sensor.value, self._sensor.is_active, self._rps) - - def _detect(self): - # self._up += 1 - # if self._up >= 2: - h_val = 1e6 / (timestamp() - self._detect_time) - self._rps = self._moment * h_val + (1.0 - self._moment) * self._rps - self._detect_time = timestamp() - self._up = 0 - - -def main(): - p_odo = JSONPublisher(url="tcp://0.0.0.0:5560", topic="ras/sensor/odometer") - s_hall = HallRps() - - rate = 0.02 # 50 Hz. - while not quit_event.is_set(): - p_odo.publish(data=dict(rps=s_hall.rps())) - s_hall.tick() - time.sleep(rate) - - -if __name__ == "__main__": - logging.basicConfig(format=log_format) - logging.getLogger().setLevel(logging.INFO) - main() diff --git a/archived/ros-gstreamer.dockerfile b/archived/ros-gstreamer.dockerfile deleted file mode 100644 index 84bccc2d..00000000 --- a/archived/ros-gstreamer.dockerfile +++ /dev/null @@ -1,32 +0,0 @@ -# docker build -f -t centipede2donald/ros-melodic:python27-opencv32-gstreamer10 . 
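The `HallRps` odometer above derives wheel revolutions per second from the interval between Hall-sensor pulses and smooths the estimate with an exponential moving average, weighting each new sample by `moment`. A self-contained sketch of that update rule, assuming microsecond timestamps as produced by `byodr.utils.timestamp`:

```python
def update_rps(rps, prev_detect_us, now_us, moment=0.10):
    # Instantaneous rate from the latest pulse interval (microseconds).
    instantaneous = 1e6 / (now_us - prev_detect_us)
    # Exponential moving average: the new sample is weighted by `moment`.
    return moment * instantaneous + (1.0 - moment) * rps

# One pulse every 100 ms converges to 10 revolutions per second.
rps, t = 0.0, 0
for _ in range(200):
    rps = update_rps(rps, t, t + 100_000)
    t += 100_000
assert abs(rps - 10.0) < 1e-3
```

The `tick` method complements this by decaying the estimate toward zero when no pulse arrives within 0.1 s, so the reported rate drops once the wheel stops.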
-FROM ros:melodic - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install -y --no-install-recommends \ - build-essential \ - python-dev \ - python-opencv \ - python-pip \ - python-setuptools \ - python-wheel \ - python-zmq \ - wget - -RUN apt-get update && apt-get install -f -y \ - gstreamer1.0-plugins-base \ - gstreamer1.0-plugins-good \ - gstreamer1.0-plugins-bad \ - gstreamer1.0-plugins-ugly \ - gstreamer1.0-libav \ - gstreamer1.0-doc \ - gstreamer1.0-tools \ - gstreamer1.0-x \ - gstreamer1.0-alsa \ - gstreamer1.0-gl \ - gstreamer1.0-gtk3 \ - gstreamer1.0-qt5 \ - gstreamer1.0-pulseaudio \ - libgstreamer1.0-0 \ - python-gi \ - && rm -rf /var/lib/apt/lists/* diff --git a/archived/ros-serial.dockerfile b/archived/ros-serial.dockerfile deleted file mode 100644 index 03cb52d4..00000000 --- a/archived/ros-serial.dockerfile +++ /dev/null @@ -1,8 +0,0 @@ -# docker build -f -t centipede2donald/ros-melodic:rosserial . -FROM ros:melodic - -RUN apt-get update && apt-get install -y --no-install-recommends \ - ros-melodic-rosserial \ - && rm -rf /var/lib/apt/lists/* - -ENTRYPOINT ["/ros_entrypoint.sh"] \ No newline at end of file diff --git a/archived/setup.py b/archived/setup.py deleted file mode 100755 index 94716d7b..00000000 --- a/archived/setup.py +++ /dev/null @@ -1,475 +0,0 @@ -#!/usr/bin/python3 -import datetime -import json -import logging -import os -import subprocess - -logger = logging.getLogger(__name__) - - -def _run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True): - return subprocess.run(args, stdout=stdout, stderr=stderr, universal_newlines=universal_newlines) - - -def do_build_directory(user, group, build_dirname): - if os.path.exists(build_dirname): - return "The build directory already exists." 
- _umask = os.umask(000) - os.mkdir(build_dirname, mode=0o775) - os.umask(_umask) - _run(["chown", "{}:{}".format(user, group), build_dirname]) - _run(["chmod", "g+s", build_dirname]) - return "The build directory was created." - - -def remove_udev_rules(fname): - if os.path.exists(fname): - _run(["rm", fname]) - _run(["/etc/init.d/udev", "reload"]) - return "Removed {}.".format(fname) - - -def create_udev_rules(fname, contents): - if os.path.exists(fname): - return "File {} exists.".format(fname) - with open(fname, mode="w") as f: - f.write(contents) - _run(["/etc/init.d/udev", "reload"]) - return "Created {}.".format(fname) - - -def pull_docker_images(proceed, docker_files, environment, fn_callback): - if proceed: - _command = ["docker-compose"] + [y for x in map(lambda df: ["-f", df], docker_files) for y in x] + ["pull"] - process = subprocess.Popen(_command, env=environment, stdout=subprocess.PIPE) - while True: - if process.poll() is not None: - break - output = process.stdout.readline() - if output: - fn_callback(str(output.strip())) - return _run(["docker", "images"]).stdout - else: - return "Skipped docker images pull." - - -def stop_and_remove_services(service_name, system_directory="/etc/systemd/system"): - # Stop the services and remove any existing service definitions.
- _run(["systemctl", "stop", service_name]) - _run(["systemctl", "disable", service_name]) - _run(["rm", os.path.join(system_directory, service_name)]) - return "Stopped and removed {}.".format(service_name) - - -def create_services(service_name, service_contents, system_directory="/etc/systemd/system"): - with open(os.path.join(system_directory, service_name), mode="w") as f: - f.write(service_contents) - _run(["systemctl", "daemon-reload"]) - _run(["systemctl", "enable", service_name]) - return _run(["systemctl", "list-unit-files", service_name]).stdout - - -def start_services(service_name): - _run(["systemctl", "start", service_name]) - return _run(["systemctl", "list-units", service_name]).stdout - - -class TegraStateManager(object): - _f_tegra_release = "/etc/nv_tegra_release" - _f_cuda_version = "/usr/local/cuda/version.txt" - - def __init__(self, build_dirname="build", log_prefix="setup", default_application_dirname="byodrapp"): - self.build_dirname = build_dirname - self.log_filename = log_prefix + ".log" - self.json_filename = log_prefix + ".json" - self.default_application_dirname = default_application_dirname - self.state = dict() - self.open_log = None - self._init_state() - - def _init_state(self): - # Find the terminal user regardless of nested sudo. - _user = _run(["who", "am", "i"]).stdout.split(" ")[0] - self.state["user.name"] = _user - self.state["user.group"] = _run(["id", "-gn", _user]).stdout.strip() - self.state["user.home"] = os.environ["HOME"] - # Determine the tegra release from disk. - _tegra_release = "" - _fname = TegraStateManager._f_tegra_release - if os.path.exists(_fname): - _release, _revision = [x.strip() for x in _run(["head", "-n", "1", _fname]).stdout.split(",")[:2]] - _tegra_release = "{}.{}".format(_release.replace("# R", "").replace(" (release)", ""), _revision.replace("REVISION: ", "")) - self.state["tegra.release"] = _tegra_release - # Determine the CUDA version. 
- _fname = TegraStateManager._f_cuda_version - self.state["cuda.version"] = _run(["cat", _fname]).stdout.strip().split(" ")[-1] if os.path.exists(_fname) else "" - _file = os.path.join(self.build_dirname, self.json_filename) - # Read previous state if any. - if os.path.exists(_file): - with open(_file, mode="r") as workfile: - try: - self.state.update(json.load(workfile)) - except Exception as e: - logger.warning("Exception reading json file: " + str(e) + "\n") - # Inits. - if "application.directory" not in self.state.keys(): - self.state["application.directory"] = os.path.join(self.state["user.home"], self.default_application_dirname) - - def get_application_directory(self): - return self.state["application.directory"] - - def get_sessions_directory(self): - return os.path.join(self.get_application_directory(), "sessions") - - def get_config_directory(self): - return os.path.join(self.get_application_directory(), "config") - - def get_config_filepath(self): - return os.path.join(self.get_config_directory(), "config.ini") - - def set_application_directory(self, dirname): - self.state["application.directory"] = dirname - - def __enter__(self): - self.open_log = open(os.path.join(self.build_dirname, self.log_filename), mode="a") - return self - - def __exit__(self, *args): - self.open_log.close() - with open(os.path.join(self.build_dirname, self.json_filename), mode="w") as f: - json.dump({"application.directory": self.state["application.directory"]}, f) - - def log(self, text, new_line=True): - self.open_log.write(text) - if new_line: - self.open_log.write("\n") - - -class TegraInstaller(object): - _supported_tegra_releases = ("32.2.0", "32.2.1", "32.3.1") - - # The constants are the same as those used in the application docker builds. 
- APP_USER_ID = 1990 - APP_GROUP_ID = 1990 - APP_USER_NAME = "byodr" - APP_GROUP_NAME = "byodr" - - _udev_relay_rules_file = "/etc/udev/rules.d/99-byodr.rules" - _udev_relay_rules_contents = """ -SUBSYSTEM=="usb", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", GROUP="{group}", MODE="0660", SYMLINK+="usbrelay", TAG+="systemd" -""".format( - **{"idVendor": "{idVendor}", "idProduct": "{idProduct}", "group": APP_GROUP_NAME} - ) - - _systemd_system_directory = "/etc/systemd/system" - _systemd_service_name = "byodr.service" - _systemd_service_template = """ -[Unit] -Description=Byodr CE Onboard Processes -Requires=docker.service -After=docker.service - -[Service] -User={sd_service_user} -Group={sd_service_group} -Environment=DC_CONFIG_DIR={sd_config_dir} -Environment=DC_RECORDER_SESSIONS={sd_sessions_dir} -TimeoutStartSec=0 -Restart=on-failure -ExecStart=/usr/bin/docker-compose {sd_compose_files} up -ExecStop=/usr/bin/docker-compose {sd_compose_files} down -v - -[Install] -WantedBy=multi-user.target -""" - - _application_config_template = """ -[camera] -camera.user = user1 -camera.password = HaikuPlot876 -camera.ptz.flip = tilt - -[vehicle] -ras.master.uri = tcp://raspberrypi -ras.throttle.domain.forward.shift = 4 -ras.throttle.domain.backward.shift = 0 -ras.throttle.domain.scale = 10 -ras.throttle.reverse.gear = -25 -""" - - def __init__(self, script_location, build_dirname="build"): - self._docker_files = [os.path.join(script_location, "docker", "docker-compose.yml"), os.path.join(script_location, "docker", "docker-compose.tegra.yml")] - self.manager = TegraStateManager(build_dirname=build_dirname) - - def get_state(self): - return self.manager.state - - def get_user(self): - return self.manager.state["user.name"] - - def get_group(self): - return self.manager.state["user.group"] - - @staticmethod - def use_application_directory(): - return True - - def get_application_directory(self): - return self.manager.get_application_directory() - - def __enter__(self): - 
self.manager.__enter__() - return self - - def __exit__(self, *args): - self.manager.__exit__(*args) - - def log(self, text, new_line=True): - self.manager.log(text, new_line) - - def do_application_user(self): - ti = TegraInstaller - user = self.get_user() - _passwd_line = _run(["grep", ti.APP_USER_NAME, "/etc/passwd"]).stdout.strip() - if "{}:".format(ti.APP_USER_NAME) in _passwd_line: - return _passwd_line - _run(["groupadd", "--gid", str(ti.APP_GROUP_ID), "--system", ti.APP_GROUP_NAME]) - _run(["useradd", "--uid", str(ti.APP_USER_ID), "--gid", str(ti.APP_GROUP_ID), "--system", ti.APP_USER_NAME]) - # Make sure the regular user has group access as well. - _run(["usermod", "-aG", ti.APP_GROUP_NAME, user]) - return "Created user {}:{} and group {}:{}.".format(ti.APP_USER_NAME, ti.APP_USER_ID, ti.APP_GROUP_NAME, ti.APP_GROUP_ID) - - def do_application_directory(self, application_dir): - ti = TegraInstaller - user = self.get_user() - self.manager.set_application_directory(application_dir) - _umask = os.umask(000) - os.makedirs(application_dir, mode=0o775, exist_ok=True) - os.makedirs(self.manager.get_config_directory(), mode=0o775, exist_ok=True) - os.makedirs(self.manager.get_sessions_directory(), mode=0o775, exist_ok=True) - os.umask(_umask) - _run(["chown", "-R", "{}:{}".format(user, ti.APP_GROUP_NAME), self.manager.get_config_directory()]) - _run(["chown", "-R", "{}:{}".format(user, ti.APP_GROUP_NAME), self.manager.get_sessions_directory()]) - _run(["chmod", "-R", "g+s", self.manager.get_sessions_directory()]) - return "The application directory is {}.".format(application_dir) - - def do_application_config_file(self): - ti = TegraInstaller - user = self.get_user() - config_file = self.manager.get_config_filepath() - if not os.path.exists(config_file): - with open(config_file, mode="w") as f: - f.write(ti._application_config_template) - _run(["chmod", "664", config_file]) - _run(["chown", "{}:{}".format(user, ti.APP_GROUP_NAME), config_file]) - return _run(["cat", 
config_file]).stdout - - def pull_docker_images(self, proceed, fn_output): - return pull_docker_images(proceed, self._docker_files, {"DC_CONFIG_DIR": "", "DC_RECORDER_SESSIONS": ""}, fn_output) - - @staticmethod - def stop_and_remove_services(): - result = remove_udev_rules(TegraInstaller._udev_relay_rules_file) - return result + "\n" + stop_and_remove_services(TegraInstaller._systemd_service_name) - - def create_services(self): - result = create_udev_rules(TegraInstaller._udev_relay_rules_file, TegraInstaller._udev_relay_rules_contents) - _m = { - "sd_service_user": self.get_user(), - "sd_service_group": self.get_group(), - "sd_config_dir": self.manager.get_config_directory(), - "sd_sessions_dir": self.manager.get_sessions_directory(), - "sd_compose_files": " ".join("-f {}".format(name) for name in self._docker_files), - } - _contents = TegraInstaller._systemd_service_template.format(**_m) - return result + "\n" + create_services(TegraInstaller._systemd_service_name, _contents) - - @staticmethod - def start_services(): - return start_services(TegraInstaller._systemd_service_name) - - -class PiInstaller(object): - _systemd_system_directory = "/etc/systemd/system" - _systemd_service_name = "byodr.service" - _systemd_service_template = """ -[Unit] -Description=Byodr CE Onboard Processes -Requires=docker.service -After=docker.service - -[Service] -User={sd_service_user} -Group={sd_service_group} -TimeoutStartSec=0 -Restart=on-failure -ExecStart=/usr/bin/docker-compose {sd_compose_files} up -ExecStop=/usr/bin/docker-compose {sd_compose_files} down -v - -[Install] -WantedBy=multi-user.target -""" - - def __init__(self, script_location, build_dirname="build", log_prefix="setup"): - self.build_dirname = build_dirname - self._docker_files = [os.path.join(script_location, "raspi", "docker-compose.yml")] - self.log_filename = log_prefix + ".log" - self.state = dict() - self.open_log = None - self._init_state() - - def _init_state(self): - # Find the terminal user regardless 
of nested sudo. - _user = _run(["who", "am", "i"]).stdout.split(" ")[0] - self.state["user.name"] = _user - self.state["user.group"] = _run(["id", "-gn", _user]).stdout.strip() - self.state["user.home"] = os.environ["HOME"] - - def get_state(self): - return self.state - - def get_user(self): - return self.state["user.name"] - - def get_group(self): - return self.state["user.group"] - - @staticmethod - def use_application_directory(): - return False - - @staticmethod - def do_application_user(): - return "Ok" - - def __enter__(self): - self.open_log = open(os.path.join(self.build_dirname, self.log_filename), mode="a") - return self - - def __exit__(self, *args): - self.open_log.close() - - def log(self, text, new_line=True): - self.open_log.write(text) - if new_line: - self.open_log.write("\n") - - def pull_docker_images(self, proceed, fn_output): - return pull_docker_images(proceed, self._docker_files, {"DC_CONFIG_DIR": "", "DC_RECORDER_SESSIONS": ""}, fn_output) - - @staticmethod - def stop_and_remove_services(): - return stop_and_remove_services(PiInstaller._systemd_service_name) - - def create_services(self): - _m = {"sd_service_user": self.get_user(), "sd_service_group": self.get_group(), "sd_compose_files": " ".join("-f {}".format(name) for name in self._docker_files)} - _contents = PiInstaller._systemd_service_template.format(**_m) - return create_services(PiInstaller._systemd_service_name, _contents) - - @staticmethod - def start_services(): - return start_services(PiInstaller._systemd_service_name) - - -def create_installer(script_location, build_dirname): - machine = _run(["uname", "-m"]).stdout.strip() - if machine == "armv7l": - return PiInstaller(script_location=script_location, build_dirname=build_dirname) - else: - return TegraInstaller(script_location=script_location, build_dirname=build_dirname) - - -def main(): - # This script must be run as root. 
- if os.environ["USER"] != "root":
- print("Please run this script as root, sudo ./{}".format(os.path.basename(__file__)))
- exit(-1)
-
- _script_location = os.path.join(os.getcwd(), os.path.dirname(__file__))
- build_dirname = "build"
-
- installer = create_installer(script_location=_script_location, build_dirname=build_dirname)
- state, user, group = installer.get_state(), installer.get_user(), installer.get_group()
- print("Checking build directory.")
- _result = do_build_directory(user, group, build_dirname)
- print(_result)
-
- with installer:
- installer.log("\n--- {} ---".format(datetime.datetime.today().strftime("%a %b %d %H:%M:%S %Y")))
- print("Welcome to the installer, the following information will be used:")
- max_len = max(len(l) for l in state.keys())
- for key in state.keys():
- print("{}{}".format(key.ljust(max_len + 1), state[key]))
-
- print("\nChecking application user and group.")
- _result = installer.do_application_user()
- installer.log(_result)
- print(_result)
-
- if installer.use_application_directory():
- _app_dir = installer.get_application_directory()
- print("\nChecking application directory.")
- print("Which directory do you want to use to store data in?")
- print("Type the full path, press Enter to use {}, or ^C to quit.".format(_app_dir))
- print("> ", end="")
- _answer = input()
- if _answer.startswith("/") or _answer.startswith("~"):
- _app_dir = os.path.expanduser(_answer)
- else:
- print("\nYour path did not start with '/' or '~', ignored.")
- _result = installer.do_application_directory(_app_dir)
- installer.log(_result)
- print(_result)
- # Make sure there is an application config file. 
- print("\nChecking application configuration file.")
- _result = installer.do_application_config_file()
- installer.log(_result)
- print(_result)
-
- print("\nThis step requires downloading large binary files which takes time and bandwidth, do you want to proceed?")
- print("Type y or n, then press Enter.")
- print("> ", end="")
-
- def _fn_output(s):
- installer.log(s)
- print(s)
-
- _answer = input()
- _docker_proceed = _answer == "y"
- _result = installer.pull_docker_images(_docker_proceed, _fn_output)
- installer.log(_result)
- print(_result)
-
- print("\nDo you want to (re)create the system services?")
- print("> ", end="")
- _answer = input()
- if _answer == "y":
- print("\nRemoving systemd services.")
- _result = installer.stop_and_remove_services()
- installer.log(_result)
- print(_result)
-
- print("\nRecreating systemd services.")
- _result = installer.create_services()
- installer.log(_result)
- print(_result)
-
- print("\nDo you want to start the services?")
- print("> ", end="")
- _answer = input()
- if _answer == "y":
- _result = installer.start_services()
- installer.log(_result)
- print(_result)
-
- print("")
- print("The installer finished and a log file can be found in directory '{}'.".format(build_dirname))
- print("")
-
-
-if __name__ == "__main__":
- try:
- main()
- except KeyboardInterrupt:
- print("")
diff --git a/docker-compose.carla.yml b/docker-compose.carla.yml
deleted file mode 100644
index 102bd25d..00000000
--- a/docker-compose.carla.yml
+++ /dev/null
@@ -1,64 +0,0 @@
-version: '2'
-services:
- zerotier:
- image: rwgrim/docker-noop:latest
- restart: 'no'
- command: ["/bin/true"]
- wireguard:
- image: rwgrim/docker-noop:latest
- restart: 'no'
- command: ["/bin/true"]
- httpd:
- restart: 'no'
- volumes:
- - ./common:/common
- - ./httpd:/app
- ftpd:
- restart: 'no'
- volumes:
- - ./common:/common
- - ./ftpd:/app
- rosnode:
- restart: 'no'
- volumes:
- - ./common:/common
- - ./rosnode:/app
- mongodb:
- restart: 'no'
- volumes:
- - ./common:/common
- 
- ./mongodb:/app - vehicle: - # Need extra cpu resources to run the h264 encoder sockets. - cpuset: "1,4,5" - build: - context: . - dockerfile: vehicles/carla09/Dockerfile - image: centipede2donald/byodr-ce:carla-099 - restart: 'no' - network_mode: host - volumes: - - ./common:/common - - ./vehicles/carla09:/app - teleop: - restart: 'no' - volumes: - - ./common:/common - - ./teleop:/app - pilot: - restart: 'no' - volumes: - - ./common:/common - - ./pilot:/app - inference: - build: - context: . - dockerfile: inference/runtime-cp36-x86.dockerfile - image: centipede2donald/byodr-ce:inference-carla - restart: 'no' - ipc: host - environment: - - NVIDIA_VISIBLE_DEVICES=all - volumes: - - ./common:/common - - ./inference:/app diff --git a/docker-compose.override.yml b/docker-compose.override.yml deleted file mode 100644 index 68fcda94..00000000 --- a/docker-compose.override.yml +++ /dev/null @@ -1,5 +0,0 @@ -version: '3' # or whatever version you're using - -services: - pilot: - cpuset: '' # Clearing the CPU set constraint for testing diff --git a/docker-compose.test.yml b/docker-compose.test.yml deleted file mode 100644 index e27e9a69..00000000 --- a/docker-compose.test.yml +++ /dev/null @@ -1,53 +0,0 @@ -version: '2' -services: - zerotier: - image: rwgrim/docker-noop:latest - restart: 'no' - command: ["/bin/true"] - httpd: - image: rwgrim/docker-noop:latest - restart: 'no' - command: ["/bin/true"] - ftpd: - image: rwgrim/docker-noop:latest - restart: 'no' - command: ["/bin/true"] - rosnode: - image: rwgrim/docker-noop:latest - restart: 'no' - command: ["/bin/true"] - raspi: - build: - context: . 
- dockerfile: raspi/Dockerfile - image: centipede2donald/byodr-ce:servos - user: root - restart: 'no' - command: ["python", "-m", "pytest", "-vvv", "tests.py"] - volumes: - - ./common:/common - - ./raspi:/app - vehicle: - restart: 'no' - command: ["python", "-m", "pytest", "-vvv", "tests_rover.py"] - volumes: - - ./common:/common - - ./vehicles/rover:/app - teleop: - restart: 'no' - command: ["python", "-m", "pytest", "-vvv", "tests.py"] - volumes: - - ./common:/common - - ./teleop:/app - pilot: - restart: 'no' - command: ["python", "-m", "pytest", "-vvv", "tests.py"] - volumes: - - ./common:/common - - ./pilot:/app - inference: - restart: 'no' - command: ["python3", "-m", "pytest", "-vvv", "inference/tests.py"] - volumes: - - ./common:/common - - ./inference:/app diff --git a/docker-compose.yml b/docker-compose.yml index c84fe675..13e6ee95 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -9,44 +9,6 @@ volumes: volume_byodr_sockets: volume_byodr_sessions: services: - zerotier: - cpuset: '0' - image: zyclonite/zerotier:1.6.6 - restart: always - network_mode: host - devices: - - '/dev/net/tun' - cap_add: - - SYS_ADMIN - - NET_ADMIN - - CAP_SYS_RAWIO - volumes: - - volume_zerotier_config:/var/lib/zerotier-one:rw - wireguard: - cpuset: '0' - image: masipcat/wireguard-go - container_name: wireguard - restart: always - network_mode: host - devices: - - '/dev/net/tun' - cap_add: - - SYS_ADMIN - - NET_ADMIN - - CAP_SYS_RAWIO - volumes: - - volume_wireguard_config:/etc/wireguard:rw - httpd: - cpuset: '0' - build: - context: . - dockerfile: httpd/Dockerfile - restart: always - network_mode: host - command: ['python', 'wrap.py'] - stop_signal: SIGKILL - volumes: - - volume_byodr_config:/config:rw ftpd: cpuset: '0' build: @@ -61,18 +23,6 @@ services: volumes: - volume_ftpd_config:/etc/pureftpd:rw - volume_byodr_sessions:/home/ftpuser:rw - rosnode: - cpuset: '0' - build: - context: . 
- dockerfile: rosnode/Dockerfile
- restart: always
- command: ['python3', 'app.py', '--name', 'rosnode']
- network_mode: host
- stop_signal: SIGKILL
- volumes:
- - volume_byodr_sockets:/byodr:rw
- - volume_byodr_config:/config:ro
 mongodb:
 cpuset: '0'
 build:
 context: .
diff --git a/docs/Services_Documentation.txt b/docs/Services_Documentation.txt
deleted file mode 100644
index 0bc87099..00000000
--- a/docs/Services_Documentation.txt
+++ /dev/null
@@ -1,222 +0,0 @@
-Service Architecture
-Smart Segment
-A smart segment is a part of the robot that houses the computer systems that allow it to move autonomously. Connected segments make the robot act like a caterpillar.
-This computer system includes a Raspberry Pi, a Jetson Nano, 2 cameras, a router and 2 motor controllers.
-
-Hardware
-1) AI Camera (camera0)
-A smart segment uses a Hikvision PTZ Dome Network camera as its AI Camera.
-IP: 192.168.1.64
--Input: Footage from its surroundings
--Output: Sends H264 encoded video output to the Pi’s Docker container service called “Stream0”.
-
-2) Operator Camera (camera1)
-The smart segment also uses a 2nd Hikvision PTZ Dome Network camera as its Operator Camera.
-IP: 192.168.1.65
--Input: Footage from its surroundings
--Output: Sends H264 encoded video output to the Pi’s Docker container service called “Stream1”.
-
-3) Raspberry Pi 4B
-OS: balena-cloud-byodr-pi-raspberrypi4-64-2.99.27-v14.0.8
-IP: 192.168.1.32
-This OS allows it to communicate with Balena Cloud. Inside the Pi, there are 5 processes running, 4 of which run in their own separate Docker containers.
-
-4) Nvidia Jetson Nano
-OS: balena-cloud-byodr-nano-jetson-nano-2.88.4+rev1-v12.11.0
-IP: 192.168.1.100
-This OS allows it to communicate with Balena Cloud. Inside the Nano, there are 10 processes running, all of which run in their own separate Docker containers.
-
-5) RUT-955
-IP: 192.168.1.1
-The router inside the segment is a Teltonika RUT955. The router has LAN, WAN, 4G, 5G and LTE capabilities. 
Its Ethernet connectivity is extended with a switch. The router is responsible for all internet connectivity between the segment and the rest of the Internet.
-This router also includes an internal relay that acts as a switch between the battery and the rest of the segment. Only when the router has booted up and the relay closes does the rest of the segment's internal components receive power.
-
-6) Motor Controller 1
-The segment uses the Mini FSESC6.7 from Flipsky. It is connected via USB to the ttyACM0 serial port of the Pi.
--Input: Commands from the Pi.
--Output: Sends power to its respective motor wheel in order to turn it according to its commands.
-
-7) Motor Controller 2
-The segment uses the Mini FSESC6.7 from Flipsky. It is connected via USB to the ttyACM1 serial port of the Pi.
--Input: Commands from the Pi.
--Output: Sends power to its respective motor wheel in order to turn it according to its commands.
-
-Software stack
-1) Balena
-From Balena, we use their Balena Cloud services, and also use BalenaOS on the Raspberry Pi and Jetson Nano, to make them compatible with Balena Cloud. From the Balena Cloud we can upload new versions of software, update segments OTA, reboot, connect via SSH, and manage segments remotely.
-
-2) Docker
-Docker is a platform for building, shipping, and running applications in containers. Containers are lightweight, portable, and self-sufficient units that contain all the necessary software, libraries, and dependencies to run an application. Docker enables developers to package their applications into containers, which can be easily deployed and run on any platform that supports Docker. With Docker, developers can ensure that their applications run consistently across different environments, from development to production.
-The BYODR project includes dockerfiles that can be used to build a Docker image for each service as well as instructions on how to deploy the image onto a robot using Balena Cloud. 
By using this approach, users can ensure that the software stack is consistent and reproducible across multiple robots, and can easily deploy updates and manage their fleet of robots from a central location.
-
-3) Zerotier
-ZeroTier is a “freemium” P2P (peer-to-peer) VPN service that allows devices with internet capabilities to securely connect to virtual software-defined networks.
-The Pi has a ZeroTier instance running inside it. This means that it is equipped to work with a ZeroTier client that is running on our devices, so that we can add the Pi to our VPN network.
-Similarly to the Pi, the Nano has the same ZeroTier functionality, although it is arguably more important here, since it allows the user to connect to the Nano, and by extension the web server, via a secure ZeroTier network.
-
-4) Wireguard
-Similarly to ZeroTier, WireGuard is also a VPN. The difference is that WireGuard is used by the Nano for every network operation, as a safety measure. Because the Nano runs many more network-facing processes than the Pi, WireGuard adds an extra layer of security against attacks. This process runs inside a Docker container.
--Q: Why do we use Wireguard if we have ZT?
--A: Although ZeroTier and WireGuard look similar, the project uses them for different purposes. ZeroTier is used to create a secure network connection between the robot and the user's computer, while WireGuard is used to encrypt the data that is transmitted over that connection. Together, these technologies provide a secure and reliable way for users to remotely control the robots.
-
-Raspberry Pi docker service descriptions:
-1) Stream0
--Input: Receives video stream from the AI camera.
--Function: Creates a high quality H264 video output stream.
--Output: Sends the stream via RTSP to the web server located in Teleop. 
--Q1: Why does the Pi create the streams, and not just send them from the cameras directly to the Nano, bypassing the Pi?
-
-2) Stream1
--Input: Receives video stream from the Operator camera.
--Purpose: Similarly to the process above, it creates a high quality H264 video output stream.
--Output: Sends the stream via RTSP to the web server located in Teleop.
--Q1: How does the AI get the images for itself? From the H264 stream, or someplace else?
-
-3) Zerotier
--Input: Receives input from the user, using the built-in command line.
--Function: We can add the Pi to our VPN network.
--Output: The Pi can communicate with the software-defined virtual networks that the user has built, via the internet.
--Q1: Why does the Pi need ZeroTier?
-
-4) Servos
--Input: Receives commands in JSON format from Teleop, Inference, Pilot that request movement from the motors.
--Function: Sets up a JSON server that listens on 0.0.0.0:5555 for commands from other processes. Binding to 0.0.0.0 means accepting connections from anywhere that has network access to this device. It also sets up a JSON Publisher so that this service can send JSON data to any services that are listening. Commands received from the other services are decoded and stored in a deque.
-This service also initiates an HTTP server listening on port 9101 (the default).
--Output 1: The commands are sent to the Motor controllers via the serial USB connection.
--Output 2:
--Q1: BalenaCloud lists another service called “pigpiod”. This service and the “servos” service both use the same image in their docker container. What does the pigpiod service do?
--Q2: Why does this service have a JSON publisher? Who does it send data to?
-
-5) Battery Management System (BMS) [The only service that does not run in a docker container]
--Input: Receives data from the BMS inside the battery itself.
--Function: The Pi uses an I2C Pi Hat to communicate with the special battery that the segment uses. 
From here, the Pi can give a “pulse” to the battery in order to “reset” it. This system also allows for seamless use of 2 or more of the same battery on the same segment. This process is exclusively hardware based, so it is not running in a container.
--Output: Sends data to the BMS inside the battery.
-
-Jetson Nano docker service descriptions:
-1) HTTPD
--Input: Listens for data requests from Teleop, Pilot, Stream1, Stream0. The sources are listed in the configuration file (called haproxy.conf) that the proxy server uses.
--Function: This service sets up a proxy server (an intermediate server between the client and the actual HTTP server) using HAProxy, for load balancing and request forwarding.
--Output: Forwards requests to the same services as above, but taking into account load balancing. The destinations are listed in the configuration files that the proxy server uses.
-
-2) Inference
--Input 1: Receives stream from AI camera with the socket url being 'ipc:///byodr/camera_0.sock'
--Input 2: Receives a route from Teleop’s Output 5
--Input 3: Receives the timestamp with a 'restart' command from Output 6 of Teleop
--Function: This service provides an interface for generating steering angles and making predictions based on images. These actions are based on a trained neural network model. If this service receives input from Teleop, it overrides the self-driving directions of the model.
-This service also initiates an IPC server with url 'ipc:///byodr/inference_c.sock', and a JSON publisher with url 'ipc:///byodr/inference.sock'. The IPC server has the ability to receive requests from other services, and respond based on the message type of the request.
--Output 1: Sends the AI model’s current state (predicted action, obstacle avoidance, penalties, and other navigation-related information) to the local JSON Publisher.
--Output 2: Creates a list with any errors from this service. The list will be sent to the Teleop service upon request. 
--Q1: How do Inference, Pilot and Teleop work together, if they work together?
--Q2: How does Inference send its data to the Pi for proper movement?
-
-3) Zerotier
--Input: Receives input from the user, using the built-in command line.
--Function: The Nano can be added into a virtual network.
--Output: Can communicate securely with nodes of the same network.
-
-4) WireGuard (Not used)
--Input: Receives data from the Nano and the Router.
--Function: Encrypts the data of the Nano.
--Output: The data sent by the Nano towards the internet is encrypted.
-
-5) Teleop
--Input 1: Receives stream from the AI camera with the socket url being 'ipc:///byodr/camera_0.sock'
--Input 2: Receives stream from Op camera with the socket url being 'ipc:///byodr/camera_1.sock'
--Input 3: Receives a JSON message from the Pilot service that includes the proper action for the segment to take, depending on various factors. Potential actions are:
- >Switch to manual driving mode
- >Continue with the current manual mode commands that are being executed
- >Continue with the current autopilot mode commands that are being executed
- >Continue with the current backend (Carla simulation) mode commands that are being executed
-Note: these commands could be old, repeated or empty (do nothing)
--Input 4: Receives the current state (location, speed, timestamp and more) of the segment that is located in the Carla simulation, in a JSON format from the Vehicle service
--Input 5: Receives the AI model’s current state in a JSON format from the Inference service’s Output 1. 
--Input 6: Receives input from the Operator’s method of control
--Input 7: Receives lists of errors and/or capabilities from the following services from respective sockets:
- >Pilot service via 'ipc:///byodr/pilot_c.sock'
- >Inference service via 'ipc:///byodr/inference_c.sock'
- >Vehicle service via 'ipc:///byodr/vehicle_c.sock'
--Function: This service includes a web server that listens for inputs from multiple sources that are later used to move the robot. Key presses from the operator are registered and translated into robot movement by this service.
-This service includes a logger that logs information regarding the manual control of the robot.
-In addition, there is a function in this server that encodes the streams from the cameras to MJPEG.
-It also hosts the site design files necessary to draw the Web App.
--Output 1: Robot movement according to user’s commands???
--Output 2: Live video feed on the web app
--Output 3: MJPEG stream capability
--Output 4: Logs and messages produced during operation.
--Output 5: Publishes the segment’s selected route on its JSON Publisher.
--Output 6: Publishes a timestamp with a 'restart' command on its JSON Publisher
--Q1: How does “Teleop” send data to the Pi for robot movement?
--Q2: What does it do with the route images?
--Q3: From where does it receive user input from the controller/keyboard? (If the input is even sent here)
--Q4: Does this service also communicate with the MongoDB service somehow? If yes, how?
--Q5: What exactly does it receive from the Pilot service?
-
-6) Vehicle
--Input 1: Receives a JSON message from the Pilot service that includes the proper action for the segment to take, depending on various factors. 
Potential actions are:
- >Switch to manual driving mode
- >Continue with the current manual mode commands that are being executed
- >Continue with the current autopilot mode commands that are being executed
- >Continue with the current backend (Carla simulation) mode commands that are being executed
--Input 2: Receives the timestamp with a 'restart' command from Output 6 of Teleop
--Input 3: Receives a route from Teleop’s Output 5
--Function: This process sets up a server that connects to a CARLA simulator and communicates with it to control a self-driving car. Carla is an open-source simulator for autonomous driving research. It is used to simulate the robot’s behavior in a virtual environment.
-This service also sets up 2 HTTP servers that receive video streams from the simulated segment.
--Output 1: Publishes the current state (location, speed, timestamp and more) of the segment that is located in the Carla simulation, in the JSON Publisher.
--Output 2: Creates a list with any errors and capabilities of the simulated segment. The list will be sent to the Teleop service upon request.
--Q1: Is this process exclusively used to send data to the CARLA simulator, and nothing else regarding the driving of the robot?
--Q2: Where is the CARLA simulation hosted?
--Q3: How does this service connect to the simulation server?
-
-7) ROS Node
--Input 1: Receives a JSON message from the Pilot service that includes the proper action for the segment to take, depending on various factors. 
Potential actions are:
- >Switch to manual driving mode
- >Continue with the current manual mode commands that are being executed
- >Continue with the current autopilot mode commands that are being executed
- >Continue with the current backend (Carla simulation) mode commands that are being executed
--Input 2: Receives the timestamp with a 'restart' command from Output 6 of Teleop
--Function: This service defines a ROS2 node which connects to a teleop node and a pilot node, and switches the driving mode to Autopilot or Manual, depending on user input. It also sets a max speed for the segment.
--Output: Sends “m” to the Pilot service
--Q1: Why exactly do we need this service?
--Q2: Does the communication with other services imply the existence of multiple nodes?
--Q3: Why does it publish JSON data only to the Pilot, and not to both Pilot and Teleop?
--Q4: What is “m” and what does it include?
-
-8) Pilot
--Input 1: Receives a route from Teleop’s Output 5
--Input 2: Receives “m” from Rosnode’s Output 1
--Input 3: Receives the current state (location, speed, timestamp and more) of the segment that is located in the Carla simulation from Vehicle’s Output 1.
--Input 4: Receives the AI model’s current state (predicted action, obstacle avoidance, penalties, and other navigation-related information) from Inference’s Output 1
--Input 5: Receives the timestamp with a 'restart' command from Output 6 of Teleop
--Input 6: Receives user input (????????????)
--Function: This process sets up a JSON publisher and a local IPC server to send data to other services that have JSON collectors. It is also responsible for controlling the segment’s movement by using a pre-trained AI model, or using user input, or driving the segment inside the simulation.
--Output 1: Publishes a JSON message to its JSON Publisher that includes the proper action for the segment to take, depending on various factors. 
Potential actions are:
- >Switch to manual driving mode
- >Continue with the current manual mode commands that are being executed
- >Continue with the current autopilot mode commands that are being executed
- >Continue with the current backend (Carla simulation) mode commands that are being executed
--Output 2: Creates a list with any errors from this service. The list will be sent to the Teleop service upon request.
--Q1: Is this process exclusively used by the robot for its autonomous driving?
--Q2: How does this service cooperate with “Inference”?
--Q3: Why does this service start an HTTP server?
--Q4: How does this service send movement commands to the Pi? (If it does)
-
-9) MongoDB
--Input: Receives data from the segment, and stores any and all logs produced by the other services (?)
--Function: This service creates a default MongoDB user and starts a configurable MongoDB server on the local machine.
--Output: Stores logs in its built-in database
--Q1: What does the DB store inside it?
--Q2: How does the DB get the data that it stores?
-
-10) FTPD
--Input: Receives the newly trained model from the training server.
--Function: This service creates a Pure-FTPd Server with a predefined set of commands. This server is used to send the segment's training data to the training server and, similarly, to receive the trained model back.
--Output: Sends data to the AI training server with parameters for its specific training.
--Q1: Is this the code that connects the Nano to the FileZilla FTP server? (Mentioned in the readthedocs)
--Q2: How does this FTP server store, send and receive data from the training server?
-
-
-General questions
-Q1: How do JSON receivers and publishers work? Because this is used to send data from one service to another.
-Q2: If all segments are in a ZeroTier network, is the data sent between the segments encrypted?
-Q3: Why do we use Docker for this project?
-Q4: Where is the watchdog function of the Pi/Nano? 
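Regarding general question Q1 above: the services publish JSON state on IPC sockets (e.g. 'ipc:///byodr/inference.sock') and other services collect it. The sketch below is an illustration of that publisher/collector pattern only, using hypothetical class names and the standard library; it does not reproduce the project's actual messaging code or wire protocol. The bounded deque mirrors the "commands are decoded and stored in a deque" behaviour described for the Servos service, on the assumption that only the freshest message matters.

```python
import json
import threading
from collections import deque

# Illustration only: names below (StatePublisher, LatestOnlyCollector) are
# hypothetical and not taken from the project's code base.

class StatePublisher:
    """Serializes state to JSON and fans it out to subscribed collectors."""

    def __init__(self):
        self._subscribers = []
        self._lock = threading.Lock()

    def subscribe(self, collector):
        with self._lock:
            self._subscribers.append(collector)

    def publish(self, state):
        payload = json.dumps(state)  # serialize once, deliver to every listener
        with self._lock:
            for collector in self._subscribers:
                collector.receive(payload)

class LatestOnlyCollector:
    """Keeps only the most recent message; stale commands are dropped."""

    def __init__(self):
        self._queue = deque(maxlen=1)  # new messages evict old ones

    def receive(self, payload):
        self._queue.append(json.loads(payload))

    def latest(self):
        return self._queue[-1] if self._queue else None

publisher = StatePublisher()
collector = LatestOnlyCollector()
publisher.subscribe(collector)
publisher.publish({"action": "manual", "throttle": 0.0})
publisher.publish({"action": "autopilot", "throttle": 0.2})
print(collector.latest())  # only the newest state survives
```

A slow consumer therefore never replays an outdated movement command: whatever it reads next is the last state published, which is the safe choice for driving commands.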
diff --git a/docs/about.md b/docs/about.md
deleted file mode 100644
index d4517721..00000000
--- a/docs/about.md
+++ /dev/null
@@ -1,7 +0,0 @@
-# About
-
-### MWLC
-
-For more information please visit [www.mwlc.global](http://www.mwlc.global).
-
-Thank you.
\ No newline at end of file
diff --git a/docs/autopilot_manual.md b/docs/autopilot_manual.md
deleted file mode 100644
index 115fd2a5..00000000
--- a/docs/autopilot_manual.md
+++ /dev/null
@@ -1,279 +0,0 @@
-# Training the Autopilot
-## Introduction
-The main function of the robot is driving routes on a routine basis, carrying small goods, such as packages or inspection equipment.
-As there is no driver on board, the vehicle can be smaller, cheaper and will consume less energy.
-Without a driver on board there is a need for remote supervision by an operator or traffic controller.
-
-The robot has two modes to give an operator and/or traffic controller supervision over the vehicle and to save time:
-1. Teleoperation; for remote control of the robot. The teleoperation mode is explained in the manual [Controller and Browser](operator_manual.md).
-2. The autopilot; to train the robot and have it drive autonomously. Before the robot can drive autonomously, the autopilot has to be trained.
-
-
-
-## Autopilot general behaviour
-### Virtual tracks within a corridor
-![](img/autopilot/corridor_average.png)
-By training the autopilot the model will learn to project virtual tracks (blue) on the route.
-The collection of tracks creates a corridor (orange) on the route. Within the corridor the robot
-knows where to go. The software will try to keep the robot on the average of all tracks in the corridor, the preferred
-path (green). The corridor is defined by the envelope of virtual tracks.
-
-If the autopilot recognises a corridor it will try to follow the preferred path. As soon as it no longer recognises the corridor,
-the robot will stop. Recognition of the corridor by the robot can also be lost if circumstances change. 
Think of different -light conditions, shades, other furniture, extra cars, leaves, etc.; all can cause a lack of recognition. -Lack of recognition can be solved by additional training of the robot. -### Free passage -The robot will recognise if it has free passage on the corridor. This may not work 100% reliably, for instance when a person -jumps in front of the robot to see if it will stop. If you are using the robot in autopilot take care that no unauthorised persons -are near. The robot should be accompanied by a steward who can stop the robot by pressing the red button -or by moving it by force. -### No ambiguities, no navigation instructions -Any route can be trained without navigation instructions, as long as there is no situation that creates an ambiguity for the robot. The robot will drive as it has -'learned'. -When it encounters an ambiguity the robot cannot decide what to do. -This means that if the robot drives 'as learned' the position and orientation of the robot cannot be the same for two different preferred paths. -#### Example A -The robot can be taught to go left at a certain point. -Or (exclusive) the robot can be taught to go right at that same point. -![](img/autopilot/two_junctions.png) -However, when completing a route it cannot learn to go right at one moment and left at another moment. -![](img/autopilot/ambiguous_corridor.png) -#### Example B -The robot can be taught to cross another route. On the marked spot the position of the vehicle is the same but the orientation -is different. -![](img/autopilot/corridor_crossing.png) -This means that routes can merge. Before merging the orientation is different. -![](img/autopilot/corridor_merge.png) -#### Example C -The robot can learn to complete complex routes as long as position and orientation of the robot are not the same for different preferred -paths.
-![](img/autopilot/labyrinth.png) - -In this situation there is no difference in position and orientation for the short section of the route that is marked. -Depending on the width of the corridors and how the relevant right and left turns are executed, this situation is likely to be -ambiguous for the robot. -![](img/autopilot/labyrinth2.png) -#### Stopping -Besides stopping in the case of ambiguities, the robot will stop if it does not recognise the situation. -![](img/autopilot/uncertain.png) - -The robot will also stop if there is no free passage. -![](img/autopilot/blocked_passage.png) -## Your own route using autopilot -Training a new route is done in steps. For a simple route it will take less than a day of collecting data. -More complex routes will take more time. -To train a route follow these steps: -1. Define the route -2. Send route information to MWLC -3. Have the operator drive the route in tele-operation mode to ensure he or she can drive the route smoothly. The autopilot learns to drive the route in the same way as the operator. The autopilot can never do it better than the operator. -4. Ensure that the robot is accompanied by a person who has received safety instructions, the steward. -5. Complete the route 5-10 times in autopilot mode in both directions. -6. Send start and end times of your training sessions to MWLC. -7. Delete the data which shouldn't be used for the training with an FTP client. -8. Wait for the model update to complete - this may take several days to weeks. -9. Repeat the training steps 4-8 until satisfied. -As long as you are not satisfied with the route you can improve it by collecting additional data. -To handle ambiguities you need extra pictures and file management on the robot. This will be explained under 'Ambiguities and halts'. -#### Define the route -* Make your first route a simple route. -* Start and finish at the same point. -* An outside route is simpler than an inside route. -* The model needs visual clues.
Visual differences along the route make it easier to learn. -* No features at all - e.g. empty space - makes it impossible to learn. -* A route along a long path is easier to learn than a short route with many non-typical turns. -* Make sure the route can be driven in clockwise and counter-clockwise directions. -* Avoid ambiguities. -#### Inform MWLC cloud-management -Make a map of the route: -* For an inside environment: a map or drawing -* For an outside environment: a map or satellite image -Collect information about the intensity of other traffic on the route as a function of time. -Contact Cloud-management at MWLC with the information. -Cloud-management will make an estimate of the number of training runs required and estimate which sections of the route require -more intensive training. -#### Teach the operator to drive smoothly -All interventions in autopilot mode done by the operator are seen as training data and will be integrated into the model. -To avoid unwanted behaviour of the robot the operator should be able to drive the route smoothly, without unnecessary turns or braking. -Before training the robot, the operator should drive the route completely to see if unwanted manoeuvres have to be made. -#### Collect data -##### First training -Steps: -- Check the last update of the [Controller and Browser](operator_manual.md) manual. -- Have a steward accompany the robot, out of sight of the forward-facing camera. -- Place the robot at the start of the route. This can be done with teleoperation. -- Make a note of the start time of the training. -- Set the robot to autopilot mode. -- In the first training session set the max speed to zero, to avoid unplanned accelerations. -![](img/autopilot/max_speed_zero.jpg) -- Start driving your robot. -- Drive slowly to collect more visuals. Between 20 and 200 visuals per meter, under different circumstances, are necessary. -- While driving, the steering wheel may swap between red and blue.
Blue means that the autopilot recognises a corridor and free passage. Ignore this during the first training. -Drive the route 5 to 10 times clockwise and 5 to 10 times counter-clockwise. -If finished, make a note of the end time of the training and leave the robot switched on. -Inform Cloud-management that you have collected data and during which time windows. -Cloud-management will analyse the data to see if and how it can be used. -##### Additional training -The autopilot needs several training sessions, especially in complex situations or if the circumstances have changed. -And outdoors the circumstances will change: clouds, rain, sun from the east, sun from the west, etc. -After the first model update you can check if the autopilot has a better understanding of the route. -We expect the steering wheel in the browser to turn blue more often than -before the first and previous data collection runs. -For additional trainings you only have to train the parts of the route where the robot doesn't do well. -Sometimes the whole route has to be trained, just like the first training. Most of the time it is interventions on the route the autopilot chooses. -Differences for training with interventions: -- Set the max speed of the autopilot to the desired speed. -- As long as the steering wheel is blue and the robot drives as the operator wants it, no intervention is necessary. -- If the steering wheel is red or the direction isn't right, the operator has to override the autopilot. -- If the speed is according to the requirements, but the direction isn't, use the steering option for the right direction. -- If the direction is according to the requirements, but the speed isn't, use the acceleration or brake options for the right speed and use the steering option to keep the robot in the right direction. If the speed is overruled, the direction should be controlled manually. - -Advice: -Don't make the intervention too short.
If you do an intervention, keep it going at least until the end of the turn (a minimum of 10 seconds). -The corridor is defined by data from the accepted routes driven in training. To broaden the corridor the operator can steer -the robot a bit further away from the preferred path. -It is possible to train the robot to go back to the center of the corridor. Take the robot, without autopilot on, just outside -the corridor. Put the robot in autopilot mode and steer it back to the preferred path. - -#### Delete unwanted training data -If the operator is not satisfied with (parts of) the training, it's possible to delete it from storage. -For deleting training data you can use an FTP client. In this explanation [FileZilla](https://filezilla-project.org) for macOS is used. -##### FileZilla -- Download and install FileZilla -- Under FileZilla/Settings..., set Transfer Mode to Active under the option Connection/FTP -- Set Host: to the ZeroTier IP address of the robot. -- Set Username: to the ftp-login-id as mentioned in your credentials document. -- Set Password: to the ftp-login-password as mentioned in your credentials document. -- Connect -Under Remote site the folder structure of the robot will appear. -##### Structure of the data on the robot -![](img/autopilot/filezilla_robot_side.jpg) -autopilot - user data, training and intervention sessions in zip format. -models - system data, the AI models downloaded by the robot from the MWLC-cloud. -photos - user data, pictures taken by the operator by pressing the left button on the controller. -routes - user data, routes managed by the operator. -To manage the training and intervention data only the autopilot data are relevant. -Photos and routes will be explained under ambiguities and halts. -Don't change anything in the system data. -##### Delete training data from the storage on the robot -In the folder autopilot, go via the year folder to the month folder of the data you want to manage.
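For bulk housekeeping, the FileZilla steps above can also be scripted with Python's standard ftplib. This is only a sketch, not part of the robot's software: the host and credentials are placeholders for the values in your credentials document, and the timestamp naming format of the zip archives is an assumption — check the actual file names and adjust `fmt` before deleting anything:

```python
from datetime import datetime
from ftplib import FTP

def sessions_in_window(names, start, end, fmt="%y%m%d%H%M%S"):
    """Pick the zip archives whose timestamp-style names fall inside [start, end]."""
    picked = []
    for name in names:
        stem = name.rsplit(".", 1)[0]
        try:
            ts = datetime.strptime(stem, fmt)  # non-matching names are skipped
        except ValueError:
            continue
        if start <= ts <= end:
            picked.append(name)
    return picked

def delete_sessions(host, user, password, month_dir, start, end):
    # Mirrors the FileZilla setup: active transfer mode, then delete in one folder.
    ftp = FTP(host)
    ftp.login(user, password)
    ftp.set_pasv(False)   # active mode, as configured above
    ftp.cwd(month_dir)    # the month folder under autopilot, reached via the year folder
    for name in sessions_in_window(ftp.nlst(), start, end):
        ftp.delete(name)
    ftp.quit()
```

A dry run first — printing `sessions_in_window(ftp.nlst(), start, end)` instead of deleting — is a sensible precaution, since deletions are later synchronised with the MWLC-cloud.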
-In the folder the zip files are named by the day and time of the first image in the zip. -If you want to see what's in the zip file, don't hesitate to download it to your own computer. -Delete the unwanted training sessions. -Modifications in the training data are synchronised with the MWLC-cloud until 3 months after they are collected. -#### Model update -To use data in a model the data has to be reviewed and annotated. -This will be done by Cloud-management. -Data will be added to the database with annotation. -When all data is reviewed a model update will be carried out. -After some time the model in the robot will be automatically updated. -If necessary, the need for additional data collection will be highlighted or further usage instructions will be provided. -The process of collecting data from the robot, reviewing it, updating the models and downloading the model back to the robot will take 5 days to 2 weeks. - -> Please note: leave the robot switched on during updates and also while charging. - -## Ambiguities, halts and commands -The robot can be asked to recognise a location as a navigationpoint. This can be done by putting some images of that location in a folder on the robot. -At that location the robot can be told to execute a predefined json-command or to drive in a certain direction. -The first option is used to have the robot stop for some time, or to increase or decrease the power to the engine. -With the second option ambiguities can be solved. - -![](img/autopilot/navigationpoint_choice.jpg) - - -An ambiguity is a location in one or more routes where the robot, having the same orientation, has more than one preferred path. To solve this ambiguity, just add some images of the direction where the robot has to go. -Navigationpoints are part of a route. -To use routes with navigationpoints and instructions you have to make, with an FTP client: -- in the folder 'routes' a folder with a recognisable name of a route.
-- in the route-folder a folder with a recognisable name of a navigationpoint -- in the navigationpoint folder the images of the navigationpoint have to be placed. -- if an instruction is desired, put the json-command in the folder of the navigationpoint. -- if more navigationpoints are used in a route, they should be sorted in the order of driving. - -To have the robot drive a route you can select it via the browser. Browse through the routes on the robot in the browser window at the top right, with the '<' and '>' buttons. The name of the active route appears between the buttons. -To show this mechanism an example is given hereunder. -### Example route with navigationpoints -Behind our office there is our test track in the garden. -Our duration test robot drove the track around 40 thousand times. -We want to complicate this by driving around an obstacle. Sometimes keeping it left, sometimes right. -It should increase the max. power to the engine before the ramp-up and decrease it after the ramp-up. -And the robot, Baxter, should sometimes stop just before the ramp-down to contemplate a few seconds about the meaning of life. -We make two routes: -mb3_left, keeping the obstacle at the left and stopping a moment before taking the ramp-down. -mb4_right, keeping the obstacle right and increasing speed at the ramp-up. -We will use several navigationpoints and json-commands. -#### Make routes in the data storage on the robot -First, decide about the position and orientation where the robot has to recognise a navigationpoint. -Take care that there are enough visual clues so the robot can recognise the location. -Make sure you have an FTP client to connect with the data storage on the robot. -In the folder 'routes', make two new folders, 'mb3_left' and 'mb4_right'. - - ![](img/autopilot/filezilla_newroutes.jpg) - -In the new route-folder 'mb3_left' we make navigationpoint-folders: -- '20_keepleft', with images of keeping the obstacle left.
-- '30_startrampup', with images of where to increase power before the ramp-up. -- '50_endrampup', with images of where to decrease it after the ramp-up. -- '80_stopbeforedown', with images of the place where to stop and drive the ramp-down slowly. -- '90_fasterafterdown', with images of the place where to speed up. - -In the new route-folder 'mb4_right' we make navigationpoint-folders: -- '20_keepright', with images of keeping the obstacle right. -- '30_startrampup', with images of where to increase power before the ramp-up. -- '50_endrampup', with images of where to decrease it after the ramp-up. - -The algorithms expect the navigationpoints in order of driving. Therefore, we start the name of the navigationpoint-folder with a number related to its position in the route. - -#### Make images and place them in the correct folder -Drive the robot to the navigationpoint. -Make an image (Left button, see manual 'Controller and Browser'). -The image appears in the folder /photos/cam0/eeyymmmmm. -Refresh if it's not there yet. -To check if the image is correct, drag the image to your computer. -If it's good, drag the image to the navigationpoint folder. -Make three images from slightly different angles and later also under different circumstances. - -![](img/autopilot/filezilla_navigationpointfolder.jpg) - -![](img/autopilot/navigationpointpictures.jpg) - - -Make images for all navigationpoints and drag them to the navigationpoint-folder. -##### Test the navigationpoints -If you want you can check if your navigationpoints are working. -Go to the browser window of the robot. -In the top right you see the route selector and navigationpoint status: - -![](img/autopilot/browser_route_selector1.jpg) - -Click the '<' or '>' to select a route.
- -By moving your cursor over the small image window you get the option to start: -![](img/autopilot/browser_route_start.jpg) - -You can stop a route by moving your cursor over the small image window when a route is running: -![](img/autopilot/browser_route_pause.jpg) -To change routes you have to stop the old route, choose another and start the new route. - -Start driving the robot in autopilot. - -If the robot recognises a navigationpoint, this will appear in the little window. - -![](img/autopilot/browser_navigationpoint_recognised1.jpg) - -#### Add json-command -In the navigationpoint-folder '/mb4_right/30_startrampup' we add a file with the name 'command.json' with content: -![](img/autopilot/json_30_startrampup.jpg) -This is just a single command. - -In the navigationpoint-folder '/mb3_left/80_stopbeforedown' we add a file with the name 'command.json' with content: -![](img/autopilot/json_80_stopbeforedown.jpg) -This is a more complex command. First reduce speed to zero, then wait 5 seconds and then set speed to 1.5. - -With these commands there is an increase of power before the ramp-up and a decrease of power after the ramp-up. -For the ramp-down the opposite is done: first a stop and a 5-second wait, then drive slowly to and off the ramp-down. After the ramp-down the regular speed is set. - - -Test if your routes are working. - -## Using the model -Never leave the robot unattended.
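The contents of the two command.json files above are shown only as screenshots, so the real schema is not reproduced here. Purely as a hypothetical illustration of the '80_stopbeforedown' behaviour (stop, wait 5 seconds, then drive at 1.5), a stepwise command file might look something like this — the field names below are invented for illustration, not the robot's actual command vocabulary:

```json
[
  {"set_max_speed": 0.0},
  {"wait_seconds": 5},
  {"set_max_speed": 1.5}
]
```

Check the screenshots above (or an existing command.json on the robot) for the real field names before creating your own.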
\ No newline at end of file diff --git a/docs/img/autopilot/ambiguous_corridor.png b/docs/img/autopilot/ambiguous_corridor.png deleted file mode 100644 index d196c268..00000000 Binary files a/docs/img/autopilot/ambiguous_corridor.png and /dev/null differ diff --git a/docs/img/autopilot/blocked_passage.png b/docs/img/autopilot/blocked_passage.png deleted file mode 100644 index 20c39f09..00000000 Binary files a/docs/img/autopilot/blocked_passage.png and /dev/null differ diff --git a/docs/img/autopilot/browser_navigationpoint_recognised1.jpg b/docs/img/autopilot/browser_navigationpoint_recognised1.jpg deleted file mode 100644 index 9205c8f6..00000000 Binary files a/docs/img/autopilot/browser_navigationpoint_recognised1.jpg and /dev/null differ diff --git a/docs/img/autopilot/browser_route_pause.jpg b/docs/img/autopilot/browser_route_pause.jpg deleted file mode 100644 index e16079a8..00000000 Binary files a/docs/img/autopilot/browser_route_pause.jpg and /dev/null differ diff --git a/docs/img/autopilot/browser_route_selector1.jpg b/docs/img/autopilot/browser_route_selector1.jpg deleted file mode 100644 index 5b47eaab..00000000 Binary files a/docs/img/autopilot/browser_route_selector1.jpg and /dev/null differ diff --git a/docs/img/autopilot/browser_route_start.jpg b/docs/img/autopilot/browser_route_start.jpg deleted file mode 100644 index 8225fd68..00000000 Binary files a/docs/img/autopilot/browser_route_start.jpg and /dev/null differ diff --git a/docs/img/autopilot/corridor_average.png b/docs/img/autopilot/corridor_average.png deleted file mode 100644 index 5106960b..00000000 Binary files a/docs/img/autopilot/corridor_average.png and /dev/null differ diff --git a/docs/img/autopilot/corridor_crossing.png b/docs/img/autopilot/corridor_crossing.png deleted file mode 100644 index 65d8c610..00000000 Binary files a/docs/img/autopilot/corridor_crossing.png and /dev/null differ diff --git a/docs/img/autopilot/corridor_merge.png b/docs/img/autopilot/corridor_merge.png deleted 
file mode 100644 index 848f8260..00000000 Binary files a/docs/img/autopilot/corridor_merge.png and /dev/null differ diff --git a/docs/img/autopilot/double_crossing.png b/docs/img/autopilot/double_crossing.png deleted file mode 100644 index cb09dc94..00000000 Binary files a/docs/img/autopilot/double_crossing.png and /dev/null differ diff --git a/docs/img/autopilot/filezilla_navigationpointfolder.jpg b/docs/img/autopilot/filezilla_navigationpointfolder.jpg deleted file mode 100644 index ecf80367..00000000 Binary files a/docs/img/autopilot/filezilla_navigationpointfolder.jpg and /dev/null differ diff --git a/docs/img/autopilot/filezilla_newroutes.jpg b/docs/img/autopilot/filezilla_newroutes.jpg deleted file mode 100644 index 2cd31e7d..00000000 Binary files a/docs/img/autopilot/filezilla_newroutes.jpg and /dev/null differ diff --git a/docs/img/autopilot/filezilla_robot_side.jpg b/docs/img/autopilot/filezilla_robot_side.jpg deleted file mode 100644 index 538fdfc1..00000000 Binary files a/docs/img/autopilot/filezilla_robot_side.jpg and /dev/null differ diff --git a/docs/img/autopilot/json_30_startrampup.jpg b/docs/img/autopilot/json_30_startrampup.jpg deleted file mode 100644 index c6a6ad4f..00000000 Binary files a/docs/img/autopilot/json_30_startrampup.jpg and /dev/null differ diff --git a/docs/img/autopilot/json_50_endrampup.jpg b/docs/img/autopilot/json_50_endrampup.jpg deleted file mode 100644 index a6056c39..00000000 Binary files a/docs/img/autopilot/json_50_endrampup.jpg and /dev/null differ diff --git a/docs/img/autopilot/json_80_stopbeforedown.jpg b/docs/img/autopilot/json_80_stopbeforedown.jpg deleted file mode 100644 index ae48e430..00000000 Binary files a/docs/img/autopilot/json_80_stopbeforedown.jpg and /dev/null differ diff --git a/docs/img/autopilot/labyrinth.png b/docs/img/autopilot/labyrinth.png deleted file mode 100644 index af2e46f2..00000000 Binary files a/docs/img/autopilot/labyrinth.png and /dev/null differ diff --git 
a/docs/img/autopilot/labyrinth2.png b/docs/img/autopilot/labyrinth2.png deleted file mode 100644 index 942e2aef..00000000 Binary files a/docs/img/autopilot/labyrinth2.png and /dev/null differ diff --git a/docs/img/autopilot/max_speed_zero.jpg b/docs/img/autopilot/max_speed_zero.jpg deleted file mode 100644 index 9f686c7e..00000000 Binary files a/docs/img/autopilot/max_speed_zero.jpg and /dev/null differ diff --git a/docs/img/autopilot/navigationpoint_choice.jpg b/docs/img/autopilot/navigationpoint_choice.jpg deleted file mode 100644 index d430390c..00000000 Binary files a/docs/img/autopilot/navigationpoint_choice.jpg and /dev/null differ diff --git a/docs/img/autopilot/navigationpointpictures.jpg b/docs/img/autopilot/navigationpointpictures.jpg deleted file mode 100644 index 8fad0dba..00000000 Binary files a/docs/img/autopilot/navigationpointpictures.jpg and /dev/null differ diff --git a/docs/img/autopilot/ros2_node_name_setting.jpg b/docs/img/autopilot/ros2_node_name_setting.jpg deleted file mode 100644 index 876225ab..00000000 Binary files a/docs/img/autopilot/ros2_node_name_setting.jpg and /dev/null differ diff --git a/docs/img/autopilot/rover_front.jpg b/docs/img/autopilot/rover_front.jpg deleted file mode 100644 index 2108f416..00000000 Binary files a/docs/img/autopilot/rover_front.jpg and /dev/null differ diff --git a/docs/img/autopilot/rover_frontOud.jpg b/docs/img/autopilot/rover_frontOud.jpg deleted file mode 100644 index 9046c7d0..00000000 Binary files a/docs/img/autopilot/rover_frontOud.jpg and /dev/null differ diff --git a/docs/img/autopilot/two_junctions.png b/docs/img/autopilot/two_junctions.png deleted file mode 100644 index 22420018..00000000 Binary files a/docs/img/autopilot/two_junctions.png and /dev/null differ diff --git a/docs/img/autopilot/uncertain.png b/docs/img/autopilot/uncertain.png deleted file mode 100644 index 81397ae0..00000000 Binary files a/docs/img/autopilot/uncertain.png and /dev/null differ diff --git 
a/docs/img/controller/cert_20lockiconclicked.jpg b/docs/img/controller/cert_20lockiconclicked.jpg deleted file mode 100644 index bc6a346d..00000000 Binary files a/docs/img/controller/cert_20lockiconclicked.jpg and /dev/null differ diff --git a/docs/img/controller/cert_25connectionnotsecureddetailsclicked.jpg b/docs/img/controller/cert_25connectionnotsecureddetailsclicked.jpg deleted file mode 100644 index 46ad619d..00000000 Binary files a/docs/img/controller/cert_25connectionnotsecureddetailsclicked.jpg and /dev/null differ diff --git a/docs/img/controller/cert_30moreinformationclicked.jpg b/docs/img/controller/cert_30moreinformationclicked.jpg deleted file mode 100644 index a8595500..00000000 Binary files a/docs/img/controller/cert_30moreinformationclicked.jpg and /dev/null differ diff --git a/docs/img/controller/cert_35viewcertificateclicked.jpg b/docs/img/controller/cert_35viewcertificateclicked.jpg deleted file mode 100644 index 99704766..00000000 Binary files a/docs/img/controller/cert_35viewcertificateclicked.jpg and /dev/null differ diff --git a/docs/img/controller/cert_40downloadcertificateclicked.jpg b/docs/img/controller/cert_40downloadcertificateclicked.jpg deleted file mode 100644 index ed4a0501..00000000 Binary files a/docs/img/controller/cert_40downloadcertificateclicked.jpg and /dev/null differ diff --git a/docs/img/controller/cert_50certificateopeninkeychain.jpg b/docs/img/controller/cert_50certificateopeninkeychain.jpg deleted file mode 100644 index e4617eac..00000000 Binary files a/docs/img/controller/cert_50certificateopeninkeychain.jpg and /dev/null differ diff --git a/docs/img/controller/cert_55trustopen.jpg b/docs/img/controller/cert_55trustopen.jpg deleted file mode 100644 index 6dd8f0d3..00000000 Binary files a/docs/img/controller/cert_55trustopen.jpg and /dev/null differ diff --git a/docs/img/controller/dash_center_autopilot.png b/docs/img/controller/dash_center_autopilot.png deleted file mode 100644 index 2392e0b6..00000000 Binary files 
a/docs/img/controller/dash_center_autopilot.png and /dev/null differ diff --git a/docs/img/controller/dash_center_black.png b/docs/img/controller/dash_center_black.png deleted file mode 100644 index 94f6c103..00000000 Binary files a/docs/img/controller/dash_center_black.png and /dev/null differ diff --git a/docs/img/controller/dash_center_blue.png b/docs/img/controller/dash_center_blue.png deleted file mode 100644 index 7abca9f6..00000000 Binary files a/docs/img/controller/dash_center_blue.png and /dev/null differ diff --git a/docs/img/controller/dash_center_red.png b/docs/img/controller/dash_center_red.png deleted file mode 100644 index 2bc93189..00000000 Binary files a/docs/img/controller/dash_center_red.png and /dev/null differ diff --git a/docs/img/controller/dash_center_teleop.png b/docs/img/controller/dash_center_teleop.png deleted file mode 100644 index 852a5ce5..00000000 Binary files a/docs/img/controller/dash_center_teleop.png and /dev/null differ diff --git a/docs/img/controller/msg_another_user_in_control.png b/docs/img/controller/msg_another_user_in_control.png deleted file mode 100644 index ea22cc33..00000000 Binary files a/docs/img/controller/msg_another_user_in_control.png and /dev/null differ diff --git a/docs/img/controller/msg_connection_lost.png b/docs/img/controller/msg_connection_lost.png deleted file mode 100644 index 45ffd86d..00000000 Binary files a/docs/img/controller/msg_connection_lost.png and /dev/null differ diff --git a/docs/img/controller/msg_controller_not_detected.png b/docs/img/controller/msg_controller_not_detected.png deleted file mode 100644 index c98e418a..00000000 Binary files a/docs/img/controller/msg_controller_not_detected.png and /dev/null differ diff --git a/docs/img/controller/ps4_autopilot.jpg b/docs/img/controller/ps4_autopilot.jpg deleted file mode 100644 index 643a5523..00000000 Binary files a/docs/img/controller/ps4_autopilot.jpg and /dev/null differ diff --git a/docs/img/controller/ps4_autopilot_overrule.jpg 
b/docs/img/controller/ps4_autopilot_overrule.jpg deleted file mode 100644 index b241d13a..00000000 Binary files a/docs/img/controller/ps4_autopilot_overrule.jpg and /dev/null differ diff --git a/docs/img/controller/ps4_camera.jpg b/docs/img/controller/ps4_camera.jpg deleted file mode 100644 index 0f076889..00000000 Binary files a/docs/img/controller/ps4_camera.jpg and /dev/null differ diff --git a/docs/img/controller/ps4_cameraOud.jpg b/docs/img/controller/ps4_cameraOud.jpg deleted file mode 100644 index 25ac2821..00000000 Binary files a/docs/img/controller/ps4_cameraOud.jpg and /dev/null differ diff --git a/docs/img/controller/ps4_forward_backward.jpg b/docs/img/controller/ps4_forward_backward.jpg deleted file mode 100644 index 0f2c468a..00000000 Binary files a/docs/img/controller/ps4_forward_backward.jpg and /dev/null differ diff --git a/docs/img/controller/ps4_forward_backwardOud.jpg b/docs/img/controller/ps4_forward_backwardOud.jpg deleted file mode 100644 index 951a0d8a..00000000 Binary files a/docs/img/controller/ps4_forward_backwardOud.jpg and /dev/null differ diff --git a/docs/img/controller/ps4_overview.jpg b/docs/img/controller/ps4_overview.jpg deleted file mode 100644 index 1fd2a674..00000000 Binary files a/docs/img/controller/ps4_overview.jpg and /dev/null differ diff --git a/docs/img/controller/ps4_reverse_gear.jpg b/docs/img/controller/ps4_reverse_gear.jpg deleted file mode 100644 index 2e07268e..00000000 Binary files a/docs/img/controller/ps4_reverse_gear.jpg and /dev/null differ diff --git a/docs/img/controller/ps4_teleop.jpg b/docs/img/controller/ps4_teleop.jpg deleted file mode 100644 index 077e47ec..00000000 Binary files a/docs/img/controller/ps4_teleop.jpg and /dev/null differ diff --git a/docs/img/controller/teleop_feedback.jpg b/docs/img/controller/teleop_feedback.jpg deleted file mode 100644 index f2d5994c..00000000 Binary files a/docs/img/controller/teleop_feedback.jpg and /dev/null differ diff --git a/docs/img/controller/turn_left.png 
b/docs/img/controller/turn_left.png deleted file mode 100644 index e78c0a25..00000000 Binary files a/docs/img/controller/turn_left.png and /dev/null differ diff --git a/docs/img/controller/turn_right.png b/docs/img/controller/turn_right.png deleted file mode 100644 index 33eb247a..00000000 Binary files a/docs/img/controller/turn_right.png and /dev/null differ diff --git a/docs/img/controller/turn_straight.png b/docs/img/controller/turn_straight.png deleted file mode 100644 index 40257479..00000000 Binary files a/docs/img/controller/turn_straight.png and /dev/null differ diff --git a/docs/img/controller/ui_menu_options.png b/docs/img/controller/ui_menu_options.png deleted file mode 100644 index 779e8ae0..00000000 Binary files a/docs/img/controller/ui_menu_options.png and /dev/null differ diff --git a/docs/img/controller/ui_overview.jpg b/docs/img/controller/ui_overview.jpg deleted file mode 100644 index 9b211d9a..00000000 Binary files a/docs/img/controller/ui_overview.jpg and /dev/null differ diff --git a/docs/img/controller/ui_overviewOud.jpg b/docs/img/controller/ui_overviewOud.jpg deleted file mode 100644 index 552630b0..00000000 Binary files a/docs/img/controller/ui_overviewOud.jpg and /dev/null differ diff --git a/docs/img/controller/wheel_black.png b/docs/img/controller/wheel_black.png deleted file mode 100644 index e16a2cf9..00000000 Binary files a/docs/img/controller/wheel_black.png and /dev/null differ diff --git a/docs/img/controller/wheel_blue_speed.png b/docs/img/controller/wheel_blue_speed.png deleted file mode 100644 index 312bccdf..00000000 Binary files a/docs/img/controller/wheel_blue_speed.png and /dev/null differ diff --git a/docs/img/controller/wheel_red.png b/docs/img/controller/wheel_red.png deleted file mode 100644 index 4c8170cc..00000000 Binary files a/docs/img/controller/wheel_red.png and /dev/null differ diff --git a/docs/img/controller/wheel_red_speed.png b/docs/img/controller/wheel_red_speed.png deleted file mode 100644 index 
3a281cc9..00000000 Binary files a/docs/img/controller/wheel_red_speed.png and /dev/null differ diff --git a/docs/img/controller/xbox_white.png b/docs/img/controller/xbox_white.png deleted file mode 100644 index 8277ae5d..00000000 Binary files a/docs/img/controller/xbox_white.png and /dev/null differ diff --git a/docs/img/drawings/ZZ_Archief/img204400v09.jpg b/docs/img/drawings/ZZ_Archief/img204400v09.jpg deleted file mode 100644 index a0e76cfd..00000000 Binary files a/docs/img/drawings/ZZ_Archief/img204400v09.jpg and /dev/null differ diff --git a/docs/img/drawings/ZZ_Archief/img206000P1v018.jpg b/docs/img/drawings/ZZ_Archief/img206000P1v018.jpg deleted file mode 100644 index 0dd18844..00000000 Binary files a/docs/img/drawings/ZZ_Archief/img206000P1v018.jpg and /dev/null differ diff --git a/docs/img/drawings/ZZ_Archief/img206000P1v019.jpg b/docs/img/drawings/ZZ_Archief/img206000P1v019.jpg deleted file mode 100644 index 50503949..00000000 Binary files a/docs/img/drawings/ZZ_Archief/img206000P1v019.jpg and /dev/null differ diff --git a/docs/img/drawings/ZZ_Archief/img206000P2v018.jpg b/docs/img/drawings/ZZ_Archief/img206000P2v018.jpg deleted file mode 100644 index 7450ef6b..00000000 Binary files a/docs/img/drawings/ZZ_Archief/img206000P2v018.jpg and /dev/null differ diff --git a/docs/img/drawings/ZZ_Archief/img206000P2v019.jpg b/docs/img/drawings/ZZ_Archief/img206000P2v019.jpg deleted file mode 100644 index 1b636467..00000000 Binary files a/docs/img/drawings/ZZ_Archief/img206000P2v019.jpg and /dev/null differ diff --git a/docs/img/drawings/img204400v011.jpg b/docs/img/drawings/img204400v011.jpg deleted file mode 100644 index 3cb6538e..00000000 Binary files a/docs/img/drawings/img204400v011.jpg and /dev/null differ diff --git a/docs/img/drawings/img206012v021.jpg b/docs/img/drawings/img206012v021.jpg deleted file mode 100644 index 991f6872..00000000 Binary files a/docs/img/drawings/img206012v021.jpg and /dev/null differ diff --git a/docs/img/drawings/img206020v022.jpg 
b/docs/img/drawings/img206020v022.jpg deleted file mode 100644 index 8aa6f184..00000000 Binary files a/docs/img/drawings/img206020v022.jpg and /dev/null differ diff --git a/docs/img/drawings/img208000v07.jpg b/docs/img/drawings/img208000v07.jpg deleted file mode 100644 index 3d037453..00000000 Binary files a/docs/img/drawings/img208000v07.jpg and /dev/null differ diff --git a/docs/img/index/rover_garden.jpg b/docs/img/index/rover_garden.jpg deleted file mode 100644 index d3a2aee0..00000000 Binary files a/docs/img/index/rover_garden.jpg and /dev/null differ diff --git a/docs/img/instructions/img204110.jpg b/docs/img/instructions/img204110.jpg deleted file mode 100644 index be7e6288..00000000 Binary files a/docs/img/instructions/img204110.jpg and /dev/null differ diff --git a/docs/img/instructions/img204120.jpg b/docs/img/instructions/img204120.jpg deleted file mode 100644 index 4e9700b1..00000000 Binary files a/docs/img/instructions/img204120.jpg and /dev/null differ diff --git a/docs/img/instructions/img204215.jpg b/docs/img/instructions/img204215.jpg deleted file mode 100644 index d2a0ec45..00000000 Binary files a/docs/img/instructions/img204215.jpg and /dev/null differ diff --git a/docs/img/instructions/img204405.jpg b/docs/img/instructions/img204405.jpg deleted file mode 100644 index 2faff37a..00000000 Binary files a/docs/img/instructions/img204405.jpg and /dev/null differ diff --git a/docs/img/instructions/img204415.jpg b/docs/img/instructions/img204415.jpg deleted file mode 100644 index e744e2c8..00000000 Binary files a/docs/img/instructions/img204415.jpg and /dev/null differ diff --git a/docs/img/instructions/img204425.jpg b/docs/img/instructions/img204425.jpg deleted file mode 100644 index c6ed06ca..00000000 Binary files a/docs/img/instructions/img204425.jpg and /dev/null differ diff --git a/docs/img/instructions/img206120.jpg b/docs/img/instructions/img206120.jpg deleted file mode 100644 index a1717ef7..00000000 Binary files 
a/docs/img/instructions/img206120.jpg and /dev/null differ diff --git a/docs/img/instructions/img206230.jpg b/docs/img/instructions/img206230.jpg deleted file mode 100644 index 192946c9..00000000 Binary files a/docs/img/instructions/img206230.jpg and /dev/null differ diff --git a/docs/img/instructions/img206235.jpg b/docs/img/instructions/img206235.jpg deleted file mode 100644 index b2f3c644..00000000 Binary files a/docs/img/instructions/img206235.jpg and /dev/null differ diff --git a/docs/img/instructions/img206245.jpg b/docs/img/instructions/img206245.jpg deleted file mode 100644 index dfd0d6a2..00000000 Binary files a/docs/img/instructions/img206245.jpg and /dev/null differ diff --git a/docs/img/instructions/img206275.jpg b/docs/img/instructions/img206275.jpg deleted file mode 100644 index b3a63218..00000000 Binary files a/docs/img/instructions/img206275.jpg and /dev/null differ diff --git a/docs/img/instructions/img206310.jpg b/docs/img/instructions/img206310.jpg deleted file mode 100644 index 8b2bf933..00000000 Binary files a/docs/img/instructions/img206310.jpg and /dev/null differ diff --git a/docs/img/instructions/img206315.jpg b/docs/img/instructions/img206315.jpg deleted file mode 100644 index cc764744..00000000 Binary files a/docs/img/instructions/img206315.jpg and /dev/null differ diff --git a/docs/img/instructions/img206510.jpg b/docs/img/instructions/img206510.jpg deleted file mode 100644 index 18233e7f..00000000 Binary files a/docs/img/instructions/img206510.jpg and /dev/null differ diff --git a/docs/img/readme/hover_vid.jpg b/docs/img/readme/hover_vid.jpg deleted file mode 100644 index 9a9b4a0b..00000000 Binary files a/docs/img/readme/hover_vid.jpg and /dev/null differ diff --git a/docs/img/readme/rover_front_small.jpg b/docs/img/readme/rover_front_small.jpg deleted file mode 100644 index 9046c7d0..00000000 Binary files a/docs/img/readme/rover_front_small.jpg and /dev/null differ diff --git a/docs/img/startup/managed_ip.png 
b/docs/img/startup/managed_ip.png deleted file mode 100644 index 68c90e6d..00000000 Binary files a/docs/img/startup/managed_ip.png and /dev/null differ diff --git a/docs/img/zerotier/client_application_menu.png b/docs/img/zerotier/client_application_menu.png deleted file mode 100644 index 562ae0e4..00000000 Binary files a/docs/img/zerotier/client_application_menu.png and /dev/null differ diff --git a/docs/img/zerotier/enter_network_id.png b/docs/img/zerotier/enter_network_id.png deleted file mode 100644 index e6bf0d25..00000000 Binary files a/docs/img/zerotier/enter_network_id.png and /dev/null differ diff --git a/docs/img/zerotier/hub_menu.png b/docs/img/zerotier/hub_menu.png deleted file mode 100644 index d070a586..00000000 Binary files a/docs/img/zerotier/hub_menu.png and /dev/null differ diff --git a/docs/img/zerotier/hub_network_members.png b/docs/img/zerotier/hub_network_members.png deleted file mode 100644 index 64587188..00000000 Binary files a/docs/img/zerotier/hub_network_members.png and /dev/null differ diff --git a/docs/img/zerotier/hub_overview_small.png b/docs/img/zerotier/hub_overview_small.png deleted file mode 100644 index d66b9482..00000000 Binary files a/docs/img/zerotier/hub_overview_small.png and /dev/null differ diff --git a/docs/img/zerotier/login_email_password.png b/docs/img/zerotier/login_email_password.png deleted file mode 100644 index cd32b25b..00000000 Binary files a/docs/img/zerotier/login_email_password.png and /dev/null differ diff --git a/docs/img/zerotier/login_to_zerotier.png b/docs/img/zerotier/login_to_zerotier.png deleted file mode 100644 index 9804a20f..00000000 Binary files a/docs/img/zerotier/login_to_zerotier.png and /dev/null differ diff --git a/docs/img/zerotier/manually_add_member.png b/docs/img/zerotier/manually_add_member.png deleted file mode 100644 index 5f0f81c0..00000000 Binary files a/docs/img/zerotier/manually_add_member.png and /dev/null differ diff --git a/docs/img/zerotier/network_id_and_name.png 
b/docs/img/zerotier/network_id_and_name.png deleted file mode 100644 index 52be9f0e..00000000 Binary files a/docs/img/zerotier/network_id_and_name.png and /dev/null differ diff --git a/docs/img/zerotier/your_networks.png b/docs/img/zerotier/your_networks.png deleted file mode 100644 index 6ba1f778..00000000 Binary files a/docs/img/zerotier/your_networks.png and /dev/null differ diff --git a/docs/index.md b/docs/index.md deleted file mode 100644 index d819650e..00000000 --- a/docs/index.md +++ /dev/null @@ -1,39 +0,0 @@ -# Build Your Own Delivery Robot - -![](img/index/rover_garden.jpg) - -Welcome, please feel free to browse or follow along the reading order as outlined below. - -## Introduction - -The main function of our robot is routine inspection and delivering small goods using simple routes. -As there is no driver on board the vehicle it can be smaller, cheaper and consume less energy. -Without a driver on board there is a need for remote supervision by an operator or traffic controller. - -The robot has two modes to give an operator and/or traffic controller supervision over the vehicle and to save time: - -1. Teleoperation - via internet on a browser -1. An autopilot with remote supervision and to train the autopilot, to drive more complex routes - -> The robot is built with *publicly available parts exclusively* and can be assembled by anyone. - -## Get Started -* Get a robot. Use the part list found under [Assemble kit](mwlc_kit.md), the [Assembly schemes](mwlc_as_schemes.md) and [Assembly documentation](mwlc_assembly.md) or place an [Order](mwlc_order.md) if you prefer pre-assembly. -* Setup the vpn. Have a look at our step-by-step [ZeroTierVPN guide](zerotier_manual.md). -* Read about [startup and maintenance](startup_manual.md). -* Connect as your robot's operator via the [Controller and Browser](operator_manual.md). Learn to use the controller and take the first spin. 
-* Use and train your robot's autopilot following [Training autopilot](autopilot_manual.md). - -> **Developer**? Check out the code on [github](https://github.com/cadenai/byodr). - - -## Disclaimer -*The robot as described in this document is a vehicle for testing software. Do not deploy the robot for applications that carry risk of -any kind. Please pay attention to the guidelines and suggestions given, also for the use and charging of the battery. -No explicit regulatory permission of any kind has been sought for deployment of the robot. -The robot and the information in this document are subject to change. The document and the contents of the document are provided free -of charge. -This document and the robot have been put together with the greatest possible care. We are in no way responsible for any omissions -and inaccuracies with regard to the information provided or the robot, for whatever reason. We do not accept any liability. -The information and the robot provided are subject to certain intellectual property rights. Unless explicitly authorised to do so, -you are not allowed to reproduce information provided for commercial purposes of any kind.* \ No newline at end of file diff --git a/docs/interfaces_manual.md b/docs/interfaces_manual.md deleted file mode 100644 index 7b7ade9f..00000000 --- a/docs/interfaces_manual.md +++ /dev/null @@ -1,36 +0,0 @@ -# Interfaces - -## WAN -The robot has several options to connect to the internet. -If a WAN connection over ethernet is available, the robot will use it. -Plug the cable into the rearmost connector. -## LAN -To connect to the robot's LAN, plug an ethernet cable into the frontmost connector. - -## ROS 2 -The robot has a simple ROS 2 API.
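As a sketch only, the `ros2 topic pub` invocations that this API accepts can be generated from a small helper. The default node name `rover1` and the `/<node>/v10/pilot/...` topic layout are taken from this section; the helper functions themselves are illustrative assumptions, not part of the official API.

```python
# Hypothetical helpers that build the ros2 CLI invocations for the robot's
# ROS 2 API. Topic names and message types are taken from this manual;
# the function names and structure are illustrative only.

def set_mode_cmd(mode: str, node: str = "rover1") -> list[str]:
    """Build the ros2 command that switches pilot mode ('autopilot' or 'teleoperation')."""
    return [
        "ros2", "topic", "pub", "--once",
        f"/{node}/v10/pilot/set_mode",
        "actionlib_msgs/msg/GoalID",
        f"{{id: '{mode}'}}",
    ]

def set_max_speed_cmd(speed: float, node: str = "rover1") -> list[str]:
    """Build the ros2 command that sets the maximum speed."""
    return [
        "ros2", "topic", "pub", "--once",
        f"/{node}/v10/pilot/set_maximum_speed",
        "std_msgs/msg/Float32",
        f"{{data: {speed}}}",
    ]

if __name__ == "__main__":
    # Print the commands instead of running them; on a machine with ROS 2
    # installed they could be passed to subprocess.run().
    print(" ".join(set_mode_cmd("autopilot")))
    print(" ".join(set_max_speed_cmd(2.0)))
```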
-There is a limited set of ROS commands that the robot can handle: - - Switch to autopilot - - Switch to teleoperation - - Set speed -### Node -The default node name of the robot is rover1. -You can change this in the settings screen on the robot (don't change the other settings). - -![](img/autopilot/ros2_node_name_setting.jpg) - -### ROS commands -To switch to autopilot mode, issue the following command: -ros2 topic pub --once /rover1/v10/pilot/set_mode actionlib_msgs/msg/GoalID "{id: 'autopilot'}" - -Set the speed using: -ros2 topic pub --once /rover1/v10/pilot/set_maximum_speed std_msgs/msg/Float32 "{data: 2.0}" - -Switch to teleoperation with: -ros2 topic pub --once /rover1/v10/pilot/set_mode actionlib_msgs/msg/GoalID "{id: 'teleoperation'}" - -## Other options -For more options, check the assembly schemes and assembly documentation. -Check out the code on [github](https://github.com/cadenai/byodr). - - diff --git a/docs/mwlc_as_schemes.md b/docs/mwlc_as_schemes.md deleted file mode 100644 index 53088630..00000000 --- a/docs/mwlc_as_schemes.md +++ /dev/null @@ -1,25 +0,0 @@ -## Schemes -The documentation is made with care but might have mistakes. -Always try to understand the logic. -Documentation is never complete; things keep changing and there is no end to the details. -Note: we are a software organisation. We built this robot at the request of our users. -They want robust, easy-to-maintain and not too expensive robots, built with materials that are available all over the world. -We had the robot design checked by a third party, but it is always at your own risk to use the robot.
-We use the schemes as our basis; the other documentation is maintained to follow them. - - -### Scheme: 204000 Enclosure -Remark: this is not the latest version. -The ethernet connections have been moved to the rearmost holes, just before the ignition key, on the right side of the enclosure. -![](img/drawings/img204400v011.jpg) - -### Scheme: 206000 Power -##### Battery power -![](img/drawings/img206012v021.jpg) -##### 12 VDC power -![](img/drawings/img206020v022.jpg) - -### Scheme: 208000 Communication - -![](img/drawings/img208000v07.jpg) \ No newline at end of file diff --git a/docs/mwlc_assembly.md b/docs/mwlc_assembly.md deleted file mode 100644 index 08abd18d..00000000 --- a/docs/mwlc_assembly.md +++ /dev/null @@ -1,115 +0,0 @@ -# Introduction
-#### Disclaimer
-This documentation provides information intended to help people build and use a self-driving robot. The development of the robot is ongoing and it is not complete. The information in the documentation is subject to change. The documentation is provided free of charge. The robot as described in the documentation is a vehicle for testing software. Do not deploy the robot for applications that carry risk of any kind. Please pay attention to the guidelines and suggestions given, also for the use and charging of the battery. No explicit regulatory permission of any kind has been sought for deployment of the robot. This documentation has been put together with the greatest possible care. We are in no way responsible for any omissions and inaccuracies with regard to the information provided on the website, for whatever reason. We do not accept any liability. The information provided on the website is subject to certain intellectual property rights. Unless explicitly authorized to do so, you are not allowed to reproduce information provided on the website for commercial purposes of any kind.
-#### Instructions
-The instructions are exported from the development database and used by our technicians. The information might be a bit cryptic.
We are always open to improvements. Don't hesitate to [contact](http://www.mwlc.global/contact/) us if you need some help or an extra picture.
-#### Content
-The robot is assembled in the following main stages:
- Chassis
- Body: External skeleton
- Body: Enclosure
- Cables, equipment
- Configuration

-# Chassis
-#### Arrma Kraton 8S
-The Kraton 8s is the largest Arrma model, but much too fast and with minimal carrying capacity.
To be able to carry more payload, the vehicle must be converted so that the motors for steering and driving last longer.
-Before the chassis can work for the robot we have to change some parts.
- Springs
- Steering servo
- Tires
-After unpacking, take off:
- Cover Green/Orange ARA409001/7
- 2* Battery holders ARA320496
- Back spoiler, ARA480022
- Spoiler mount, ARA320492
- ARA320485
- 4* Wheels including tires ARA510122/520055/550061
- Some small parts like bolts and nuts
-#### Springs
-To make the suspension firmer, first change the springs to reduce pressure on the shocks.
Put the large springs in place of the originals and the small springs around the shock shaft.
Find a video with instructions.
Note: the front springs are shorter than the back springs (ARA330572 and ARA330573).
-#### ESC tuning
-Change the ESC settings according to the Kraton 8S instruction manual:
- Low voltage cutoff (variable 3) to Low (setting 1)
- Brake strength (variable 5) to 10% (setting 7)
- Reverse strength (variable 6) to 75% (setting 3)
- Punch setting (variable 7) to level 1 (setting 1). Leave the battery connected for the next step.
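The ESC settings above can be recorded as data so they are easy to check off during assembly. Variable and setting numbers follow the Kraton 8S instruction manual as quoted in this section; the dict layout and helper are only an illustrative sketch.

```python
# ESC tuning values from the Kraton 8S manual, as quoted in this guide.
# The data structure itself is an illustration, not an official format.
ESC_TUNING = {
    "low_voltage_cutoff": {"variable": 3, "value": "Low", "setting": 1},
    "brake_strength":     {"variable": 5, "value": "10%", "setting": 7},
    "reverse_strength":   {"variable": 6, "value": "75%", "setting": 3},
    "punch_setting":      {"variable": 7, "value": "level 1", "setting": 1},
}

def checklist() -> list[str]:
    """Render the tuning table as one checkable line per ESC variable."""
    return [
        f"variable {v['variable']}: {name} = {v['value']} (setting {v['setting']})"
        for name, v in ESC_TUNING.items()
    ]
```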
-#### Change steering servo
-This is a tricky operation; take some time to look for the videos with instructions.
- Place batteries in the hand unit
- Start the Kraton and the hand unit, and set the steering wheels straight forward
- Mark the position of the servo saver, both on the saver and on the casing, with something sharp like a screwdriver, so you can check afterwards that it fits correctly.
- Switch everything off
- Release the ESC/servo unit (4 screws on the bottom)
- Release the cover (6 small screws)
- Release the receiver and the wires from the box. Take the antenna wire out of the little tube.
- Release the steering rod that runs from the servo saver to the steering connector. Watch out not to lose the little black M3 lock nut; the bolt will be replaced by an M3 x 22 mm Allen screw, quality 12.9.
-Release the plastic contra bolt in the servo saver (hex 4). Watch out: releasing is clockwise!
- Release the bolt (hex 2,5) deep in the servo saver to release it from the servo. Release the saver with an Allen screwdriver.
- Take the new servo and connect the wire to the servo first.
- Take the new receiver box. Check that the new servo fits.
- Swap the data cable on the receiver.
From the contents of the package we don't use the spring and the steering linkage.
Change the servo; take care that it is positioned exactly right.
-The steering rod bolt should be replaced with an Allen screw M3 x 22 mm, steel 12.9.
-After changing the servo you can remove the receiver.
-#### Wire shorts
-__Red wires of the on/off button__
Short the red wires of the on/off button together.
Use a double orange ferrule and heat shrink:
Heat shrink 2,5 mm, 2:1, covering the ferrule metal tube.
Heat shrink 6 mm, 3:1, covering the ferrule.
-#### Cable gland bottom
-Before putting the enclosure on the robot, first put the cable gland around the cable from the ESC.
- Take an M25 cable gland with cable gland nut, lock nut and outer seal ring.
- Put the gland nut over the cables for the ESC (2 power cables and 2 x 2 thin data cables).

- Take the multi-seal or grommet, M25, 3 holes. Cut the holes open to put the cables through.
- First put the small cable through one hole, then the orange and black cables from the ESC, each through their own hole.
Then put the cable gland around them. Screw the cable gland nut on the cable gland, not too tight.
- Ensure that the shortest wire, the steering servo connector, is reachable in the enclosure. The gland must sit quite tight to the chassis.
You can check this after making the bottom hole.
-# External skeleton
-The skeleton is built from aluminium profile, 20 x 20 mm, groove 5.
In the end it should look like the picture.
-![](img/instructions/img204110.jpg)
-We use two kinds of fastening blocks, both for groove 5: one kind for Allen screw M5 and one kind for Allen screw M4, which is used for the brackets.
These options are also available in plate versions.
The blocks are easy to work with but more expensive.
-![](img/instructions/img204120.jpg)
-#### Crossbeams
-Connect 2 crossbeams, 0,23 m, to the bottom of the chassis.
Use the empty holes from the battery holders and the M4 screws.
-#### Pillars(vertical profiles)
-Attach 4 pillars, 0,14 m, with brackets to the low crossbeams. The brackets are under the chassis.
-#### Longitudinal beams
-On the pillars the longitudinal beams are fastened.
At the right side 0,79m.
At the left side 0,85 m (longer, to connect the back camera).
To protect the front camera, both beams end just in front of the front camera.
At the back, the right beam stops 2 cm past the end of the enclosure. The left beam extends 6 cm further back, to mount the back camera.
The longitudinal beams should be close to the bottom of the enclosure but don't push it up. The enclosure should rest on the towers of the chassis.
-#### Cross beams high
-On the longitudinal beams the high crossbeams of 0,23m are attached. They are close to the enclosure to keep it in place
-#### Lock pillars
-At both the front and the back, attached to the high crossbeams, there should be lock pillars of 0,13 m.
It might be useful to connect the front lock pillar with two brackets, as the front side is often used to reposition the robot.
The lock longitudinal beam, 0,90 m, will be attached to the lock pillars.
This beam is a bit longer so that it absorbs any impacts.
Connect the beam with 2 brackets at each side for easy opening.

-#### Mount front Camera
Bracket 150*150 mm with 3 holes per side
Bracket 60*60 mm with 2 holes per side
1* Allen screw M5 6 mm
1* Allen screw M5 10 mm
Tripod screw 8 mm
2 repair washers M5
2 washers M5
2 slide nuts

-![](img/instructions/img204215.jpg)
-#### Mount back Camera
-Bracket 150*150 mm with 3 holes per side
Tripod screw
1* Allen screw M5 10 mm
6 washers M5
Slide nut
Extra bracket to lock the camera
-# Enclosure
-In the end the enclosure will look more or less like the picture.
But we have to admit, with every build we find an improvement: fewer cables, less space used, easier to open.
In the instructions you find our latest build.
The switch is below the Nano.
Sometimes we get a request for a larger battery. This takes some space.
-![](img/instructions/img204405.jpg)
-__Silica__
Don't forget to put in a pack of silica gel; this can be done at any time.
-#### Spelsberg AKL-4-t
-The Spelsberg enclosure AKL-4-t has a transparent cover and metric pre-embossings.
A slightly cheaper alternative enclosure has a non-transparent cover.
The enclosure should rest on the front and back towers of the chassis.
The centre tower is just slightly lower, and the bottom of the enclosure has ridges. To make the bottom of the enclosure rest on the towers only, cut a cross into the top of the centre tower for the enclosure to fit in.
In the end we put some double-sided adhesive tape on the towers.
-![](img/instructions/img204415.jpg)
-#### Holes, glands, connectors and buttons
-See the scheme for the location and size of the holes.
We use a step drill bit to make or widen the holes.
Keep the part that needs the hole nearby, to check that it fits.
We always start with the bottom hole and make it fit perfectly, even putting the cables through the gland to see whether the enclosure fits well.

-![](img/instructions/img204425.jpg)
-__Bottom hole__
One of the trickier parts is the location of the bottom hole for the M25 cable gland with the ESC power and data cables.
The outside bottom of the enclosure has a ridge which should stay intact.
Check the location against the already attached cable gland.
-__Charge connector__
The screws that tighten the Rosenberger connector need some space.
Avoid letting the hole drift towards the top of the enclosure: while widening, press the drill towards the bottom of the enclosure.

-__Antenna cables__
The 5 antenna cables should be at the back-left side. Bring the cables about 0,15 m into the enclosure.
- Put the cable gland nut over the cable connectors.
- Take the black single seal ring from the gland and put it over the connectors as well.
- Take the multi-seal with 5 holes, cut the holes open and put the cables through.
Put the big single seal from the cable gland around the multi-seal.
First put the cable gland over the cables before putting it in the enclosure.
Don't make the cable gland too tight yet.

-__WAN__
The WAN connector should be on the right side of the enclosure, as far to the back as possible.
On the inside, the connector should point up at a small angle to the back (the LAN connector will be above it). Use a self-adhesive cable mount under the enclosure to keep the cap with the vehicle when it is not on the connector.
-__LAN__
The LAN connector should be just above the WAN connector and a bit to the front.

-__Cable gland back camera__
Put a QT 6 grommet around the camera cable, between the camera and the first split.
Not too close to the split: it should be possible to bend the cable within the enclosure.
Take the two white parts and put them around the grommet; the grommet fits exactly right in just one position.
Slide the sealing ring over the cable and put it around the gland. Push the gland through the enclosure.
Take the black splittable contra nut and tighten the gland to the enclosure.
- Put the black 0,25m ethernet cable in the connector.
- Put the 0,5m low voltage power cable in the connector.

-__Cable gland front camera__
Put a grommet around the camera cable, between the camera and the first split.
Not too close to the split.
It should be possible to bend the cable within the enclosure.
The cable gland for the front camera works the same as for the back camera.

-__Ignition switch__
The ignition switch is mounted on the same side as the LAN and WAN connectors, just a bit to the front of the LAN connector.

-__Stop button__
The stop button is mounted on the opposite (left) side, at the front of the enclosure.
-# Cables and equipment
-#### Battery and internal frame
-__Backside cable duct__
Put a piece of cable duct, 0,28 m, against the back.
- Make it fit by taking the back corners off.
- Take teeth off at the cable gland for the antennas.
- Take tooth parts off for the camera cable gland.
- Take 3 tooth parts off for the Rosenberger connector.
Depending on the battery it might be handy to split the backside cable duct so it fits at both sides of the lengthwise cable duct.

-__Battery__
Put the battery at the back-left side of the enclosure, against the backside cable duct.

-__Lengthwise cable duct__
Put a 0,58 m cable duct in the middle, over the full length of the enclosure, against the right side of the battery.
- Make the back 3 cm flat so you can put it under the backside cable duct, or have the backside cable duct on both sides.

-__Enclosure lay-out__
The battery is the largest component in the enclosure. As soon as the battery is in place, you can make a lay-out for the other components.
Collect the main components that should go in the enclosure and prepare the equipment holders for the Nano, the Pi and the two USB relays.
Think about how to fasten the ethernet switch.
The best option is to avoid a second layer and place all equipment on the bottom of the enclosure and on top of the battery.
The battery should be fixed in place.

-#### Equipment holders
-__ Holder for the Nano steered Relay __
Use Axxatronic 72mm:
- 2 end parts
- 1 narrow part
- 1 wide part.

-__Holder for the Pi steered relay__
See the holder for the Nano steered relay.

-__Nano holder__
First, place the SD card.
- Take a PVC plate 107 x 100 mm (or 72 x 80 mm if there is not a lot of space).
- Take 4 Allen screws, 2,5 x 10 mm.
- Take 4 lock nuts, 2,5 mm.
- Take 4 spacers.
- Assemble the Nano on the PVC plate, nuts on top, cable connections on the Nano to the left.
- Put the plate with the Nano on Axxatronic 107 mm (or 72 mm) with foot.

-__Pi holder__
First, place the SD card.
Then mount the hat on the Pi with the mounting set that comes with the Pi.
- Take a PVC plate 80 x 72 mm.
- Assemble the hat on the Pi on the PVC plate, with the spacers and screws from the set delivered with the hat.
- Assemble the Pi on the PVC plate, with Allen screws 2,5 x 12 mm.
- Put the plate with the Pi on Axxatronic 72 mm with foot.

-__Optional: internal frame__
You can use this if you can't find enough space on the bottom of the enclosure.
The internal frame consists of a few pieces of aluminium profile:
a small pillar close to the battery,
connected to a horizontal piece towards the front of the enclosure, which is connected to a crossbeam to keep it stable.
Make a fitting frame.
3 pieces of profile:
- Pillar, from the bottom of the enclosure to the cover.
- Longitudinal, from the battery to the front side.
- Crossbeam, at the front from left to right.

-__ Din rail __
1. Put 0,58m din rail over the length of the right side of the enclosure but not on the incoming ESC cables.
Put it against the backside cable duct.
2. Put din rail over the length of the left side of the enclosure. From the battery to the inner front side of the enclosure.
Put it against the backside cable duct.

-For a second layer you have to improvise with the din rail. For instance:
- ethernet switch to crossbeam profile of internal frame, 0,05m
- Computers, 0,25m
- 12vdc converter, 0,10m
- 2*5vdc converter, 0,10m
-# Cabling
-Start by putting all components loosely in the enclosure.
After all cables and wires are connected, fix the components in place.
It might be useful to have the schemes at hand.
While doing this we also connect the ethernet cables, so keep that scheme nearby too.

-__Hints__
Step by step, the power cabling will be assembled.
To avoid problems, some equipment and computer cables will also be placed along the way.
Some basics:
- Put cables and wires as much as possible in the cable ducts, without forcing, of course.
- Use ferrules and cables in the right colours. This will help you later on.
- We connect the power per circuit. There may be a more efficient method, but per circuit is simple.
- Cable lengths are more complex than we thought. If there are lengths on the scheme, just ignore them.
- Tailor the length at the moment you put a wire in: take a wire in the right colour and put a ferrule on it. Connect one side. Lay it via the cable duct to the other end. Then cut it to the right length and put a ferrule on it.
- Not all wires have a ferrule at both sides, for instance in the fuse holders and the RUT955 green terminal.
- The worst mistake is not tightening a wire enough in a connector. This is hard to test.
- The schemes with remarks are leading.
- The sections below follow our working order.

-__Back camera__
Fold the ethernet cable and power cable into the cable duct.
Plug in the power wires according to the scheme and lay them in the cable duct towards the RUT distribution.
Connect the camera to the router with an ethernet cable according to the scheme.
Fold the cables into the cable duct.

-__WAN__
Put the ethernet cable in.

-![](img/instructions/img206120.jpg)
-__LAN__
Put the ethernet cable in.

-__Ignition switch__
Put the adaptor plate horizontally on the ignition switch.
Put the NO contact breaker on the lower position of the adaptor plate.

-__Stop button__
Put the adaptor plate on the stop button.
Put the NC contact breaker on the adaptor plate.
The position can be chosen as suits you best.
-__Battery and charge__ Place 4 fuse holders on the din rail, between the ethernet passthroughs and the adaptor plate of the ignition switch.
Place 2 red terminal blocks on the din rail (with the option to disconnect on the inside).
Place 2 black terminal blocks on the din rail (with the option to disconnect on the inside).
Option: if the battery cable is more than 4 mm², take a thick terminal block.
Option: if the battery has a separate cable for the plus (red) pole, take an extra red terminal block.
Option: if the battery has a separate cable for the minus (black) pole, take an extra black terminal block.
Put a shield against the open side of the last terminal block.
- Red wire from the battery into the first fuse holder.
- Red wire from the first fuse holder to the first red terminal block.
- Red wire from the charge connector into the second fuse holder.
- Red wire from the second fuse holder to the second red terminal block.
- Option: if there is a separate extra red charge wire on the battery, connect it to the extra terminal block and lay the red wire from the second fuse not to the red terminal block but also to the extra red terminal block.
- Black wire from the battery into the (thick) black terminal block.
- Black wire from the charge connector into the black terminal block.
- Option: if there is a separate extra black charge wire on the battery, put both the charging wire and the wire from the charger into the extra black terminal block.
-Create the main distribution:
- Connect the black terminal blocks with a bridge.
- Connect the red terminal blocks with a bridge.
This does not include the optional charge terminal blocks.
-__Terminal blocks ESC input__
Put the terminal blocks in, back from the cable gland, and connect the power.

-![](img/instructions/img206230.jpg)
-__SSR (Solid State Relay)__
Put the SSR loosely on the din rail in the second part of the enclosure.
Connect the power according to the scheme, via the third fuse.
After connecting the control wires in ports 1 and 2, the SSR can be fixed in place.

-![](img/instructions/img206235.jpg)
-__Power cabling to 12 VDC__
-Put the 12 VDC converter loosely in place.
The 12 VDC power starts with the ignition key; after ignition, all other systems are started.
Connect it to the battery power according to the scheme.
The incoming power side should be placed against the wall of the enclosure.

-![](img/instructions/img206275.jpg)
-__RUT955 PoE__
-Connect the power from the first 12 VDC converter to the PoE injector for the RUT955, via the fourth fuse.

-__Place RUT distribution__
-Put beside the ESC connector:
- 6 thin red terminal blocks
- Connect them with a 5-bridge and a 2-bridge
- 6 thin black terminal blocks
- Connect them with a 5-bridge and a 2-bridge
Put them in loosely.
Plug the incoming wires from the power source into the back side.
Then you can fix them in place.
Put the wires to the end power consumers on the front.

-![](img/instructions/img206310.jpg)
-__Connect RUT distribution__
-See the scheme.
This goes via the relay on the RUT955.

-![](img/instructions/img206315.jpg)
-__Ethernet switch__
-Connect the power of the ethernet switch.

-![](img/instructions/img206510.jpg)
-__Front camera__
-Plug in the power wires according to the scheme.
-__Back camera__
-Plug in the power wires according to the scheme.
-__Power Nano__
-Plug in the power wires according to the scheme, via a 5 VDC converter.

-__Power Pi__
-Plug in the power wires according to the scheme, via the 5 VDC converter and the stop button.

-__ Remaining SSR control wires __
Make the remaining control connections between the SSR and the RUT distribution.

-__ Grove cable Pi to steering and ESC __
Plug in the Grove cable according to the scheme.

-__ Ethernet Pi Nano front camera __
Plug in the Ethernet cables according to the scheme.
-# Testing
-When starting up, the following lights and sounds should appear:
- Blue: 12 VDC
- Orange: RUT
Wait, then:
- 5 VDC
- Camera check
- Ethernet lights
- Nano lights
- Light of the Nano-steered relay
- Pi lights
-Close the enclosure and use the locking profile to fix the enclosure on the chassis.
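Once the lights are on, a quick additional check is to verify that the segment's internal network answers. The sketch below uses the fixed device addresses from the service documentation (router 192.168.1.1, Pi .32, cameras .64/.65, Nano .100); adjust the table to your own build, and note that the `ping` call assumes a Linux-style `ping -c/-W`:

```python
import ipaddress
import subprocess

# Fixed address plan of one smart segment (from the service documentation).
DEVICES = {
    "router (RUT955)": "192.168.1.1",
    "raspberry pi": "192.168.1.32",
    "ai camera (camera0)": "192.168.1.64",
    "operator camera (camera1)": "192.168.1.65",
    "jetson nano": "192.168.1.100",
}


def validate_plan(devices, subnet="192.168.1.0/24"):
    """Check the address plan is sane: unique IPs, all inside the segment subnet.

    Returns the device names sorted by IP address.
    """
    net = ipaddress.ip_network(subnet)
    ips = [ipaddress.ip_address(ip) for ip in devices.values()]
    assert len(set(ips)) == len(ips), "duplicate IP in address plan"
    assert all(ip in net for ip in ips), "IP outside segment subnet"
    return sorted(devices, key=lambda name: ipaddress.ip_address(devices[name]))


def ping(ip, timeout_s=1):
    """True when the host answers a single ping (Linux 'ping -c 1 -W <s>')."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


if __name__ == "__main__":
    for name in validate_plan(DEVICES):
        state = "up" if ping(DEVICES[name]) else "DOWN"
        print(f"{DEVICES[name]:<15} {name:<25} {state}")
```

If the router is up but nothing else answers, the RUT955 relay has most likely not closed yet (see the service documentation on the power-up sequence).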

-# Questions and suggestions
-The assembly instructions are user driven: they were created based on questions, to give guidance on certain aspects.
If you have any questions, don't hesitate to contact us at [www.mwlc.global/contact/](http://www.mwlc.global/contact/).
diff --git a/docs/mwlc_kit.md b/docs/mwlc_kit.md deleted file mode 100644 index 8d52e830..00000000 --- a/docs/mwlc_kit.md +++ /dev/null @@ -1,146 +0,0 @@ -## Parts list - -The parts are summarised per subsystem. These groups are also used to differentiate the assembly stages. -- Chassis -- Body -- Power -- ICT -- Communication and sensors -Remark: Our computers use the metric system. A comma, ',', is used for decimals. -###Chassis
-The basis for the chassis is a standard RC vehicle. We use the Kraton 8S. The main changes we make are:
- Remove cover, receiver, spoiler, battery holders and some smaller parts.
- Make the steering stronger.
- Make it controllable by the computers.
- Remove the start/stop-option from the button.
- Make it possible to carry the equipment and a payload.
- Depending on the application, we also change the wheels to tires with a harder compound.
- 1 [Arrma Kraton 8S](https://www.arrma-rc.com/kraton8s/)
- 1 [Servo](https://spektrumrc.com/Products/Default.aspx?ProdId=SPMSS9120BL)
- 4 [Springs C1225-125-5000M](https://www.amatec.nl/nl/c1225-125-5000m.html)
- 4 [Springs small D13050](https://www.amatec.nl/nl/d13050.html)
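As the remark above notes, quantities in these lists use a decimal comma (e.g. `4,96` metres of aluminium profile). If you script anything around the parts list, a small helper can normalise them; the function name is illustrative:

```python
def parse_quantity(text: str) -> float:
    """Convert a decimal-comma quantity from the parts list (e.g. '4,96') to a float."""
    return float(text.replace(",", "."))


# Examples taken from the lists below: metres of profile and of wire.
assert parse_quantity("4,96") == 4.96
assert parse_quantity("0,2") == 0.2
```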
-###Body: External skeleton -The external skeleton is a component of the body.
The main components of the body are:
A. The external skeleton, to mount the enclosure, cameras and antennas to the chassis.
B. The enclosure that holds the ICT equipment.
C. The internal skeleton and materials for mounting the equipment.
For the external skeleton we use 20 x 20 mm aluminium profiles with the appropriate connections.
This profile has what is called a groove 5, but the brackets that make the connections use M4 bolts.
You can probably order this at your local hardware store.
The aluminium profile is also used for the internal frame, but is not listed again there.
- 4,96 [Aluminium Profile 20 x 20 groove 5](https://www.boikon.com/webshop/aluminium-profiles/alu-profile-20-x-20-groove-5/)
- 2 [Bracket 150*150mm(camera mount)](https://www.shi.nl/nl/catalog/ijzerwaren/balk-ankerwerk/hoeken/starx/drempelhoek/4691180/groups/g+c+sg+bl+a+nr+view)
-21 [Bracket 20 x 20, basic](https://www.boikon.com/webshop/fastenings/bracket-20-x-20-basic/)
- 3 [Cover bracket 20 x 20](https://www.boikon.com/webshop/fastenings/cover-bracket-20-x-20-basic/)
-10 [Cover cap 20 x 20](https://www.boikon.com/webshop/finishing-elements/cover-cap-20-x-20/)
-###Body: Enclosure -The enclosure protects the ICT equipment against dust and water as much as possible.
Cable glands are used to bring all cabling into the enclosure.
With correct assembly, at least protection level IP54 should be reached.
- 1 [Cable gland M25 Wiska ESKV-SET 25 ](https://www.wiska.com/en/30/pde/10066413/eskv-25.html)
- 1 [Enclosure 600 x 300 x 132 transparent cover](https://www.spelsberg.com/industrial-housing/combinable-with-knock-outs/74400401/)
- 1 [LAPP SKINTOP® DIX-M25 Multi-seal inset M25 Nitrile rubber Black](https://www.conrad.com/p/lapp-skintop-dix-m25-multi-seal-inset-m25-nitrile-rubber-black-1-pcs-527167)
- 2 [Locknut M20 separable Cofix](https://www.fraenkische.com/en/product/cofix-gegenmutter)
- 2 [QT 6 Cable grommets](https://www.icotek.com/en/product-catalogue/cable-entry-systems/cable-grommets/qt/)
- 2 [QVT 20, Split cable gland](https://www.icotek.com/en/product-catalogue/cable-glands/qvt/)
- 1 [Self-adhesive cable mount]()
- 2 [Silica gel 10 gram](https://www.conrad.com/p/silica-gel-sachet-10-g-l-x-w-x-h-72-x-57-x-3-mm-transparent-silica-gel-10-pcs-2201308)
- 1 [TRU COMPONENTS TC-MH14-5A203 Multi-seal inset M16 Rubber Black](https://www.conrad.com/p/tru-components-tc-mh14-5a203-multi-seal-inset-m16-rubber-black-1-pcs-1593553?searchTerm=1593553&searchType=suggest&searchSuggest=product)
- 1 [Wiska ESKV-RDE 20 Cable gland M20 Polyamide Grey-white (RAL 7035)](https://www.wiska.com/en/30/pde/10064986/eskv-rde-20.html)
-###Body: Internal skeleton and mounting -Within the enclosure, the equipment has to be mounted.
All equipment has DIN-rail mount options.
-10 [Axxatronic PCB holder 72mm*11,25mm](https://www.axxatronic.de/hutschienengehaeuse-und-schaltschrankgehaeuse/platinenhalter-serie-cime-72mm-cime-m-be1125.html)
- 4 [Axxatronic PCB End holder 72mm*11,25mm ](https://www.axxatronic.de/hutschienengehaeuse-und-schaltschrankgehaeuse/platinenhalter-serie-cime-72mm-cime-m-be1125.html)
- 0,8 [Cable duct](https://www.conrad.com/p/basetech-bt-2226749-cable-duct-l-x-w-x-h-2000-x-25-x-45-mm-1-pcs-grey-2226749)
- 1,1 [HellermannTyton DELTA-3F/BV DIN rail perforated Steel plate](https://www.hellermanntyton.de/produkte/verdrahtungskanaele-und-zubehoer/delta-3f/181-47061)
- 2 [PCB holder main, 107mm, 35mm with foot](https://www.axxatronic.de/hutschienengehaeuse-und-schaltschrankgehaeuse/serie-cime.html)
- 1,02 [PVC plate](https://kunststofplatenshop.nl/product/hard-pvc-donkergrijs-2-mm-ral-7011/)
- [Self-tapping screws 4.2 mm 13 mm](https://www.conrad.com/p/toolcraft-141343-self-tapping-screws-42-mm-13-mm-hex-head-din-7504-steel-zinc-galvanized-100-pcs-141343)
- 8 [Spacer (Ø x L) 5 mm x 6 mm ](https://www.conrad.nl/p/afstandsbouten-x-l-5-mm-x-6-mm-polystereen-1-stuks-540110)
-###Body: Bolts, nuts, washers and fasteners -Many bolts and nuts are used for the body.
- 0,1 [3M Double sided adhesive tape](https://multimedia.3m.com/mws/media/82874O/4611f-high-grade-double-coated-foam-tape.pdf)
- 4 [Allen screw M3 x 16](https://www.conrad.com/p/toolcraft-114518-allen-screws-m3-16-mm-hex-socket-allen-din-912-steel-100-pcs-114518)
-37 [Allen screw M4 x 10](https://www.boikon.com/webshop/fastenings/cylindrical-socket-screw-din-912-zp-m4-x-10/)
- 2 [Allen screw M4 x 25](https://www.conrad.com/p/toolcraft-1068363-allen-screws-m4-25-mm-hex-socket-allen-din-7984-stainless-steel-a2-100-pcs-1068363)
- 4 [Allen screw M5 X 10](https://www.boikon.com/webshop/fastenings/cylindrical-socket-screw-din-912-zp-m5-x-10/)
- 6 [Allen screw M5 X 6](https://www.boikon.com/webshop/fastenings/cylindrical-socket-screw-din-912-zp-m5-x-10/)
- 8 [Allen screws M2,5 16 mm](https://www.conrad.com/p/toolcraft-1061812-allen-screws-m25-16-mm-hex-socket-allen-din-912-stainless-steel-a2-100-pcs-1061812)
- 4 [Locknut M2,5](https://www.conrad.com/p/toolcraft-221969-locknut-m25-din-985-steel-zinc-plated-10-pcs-221969)
- 4 [Locknut M3](https://www.conrad.com/p/toolcraft-812808-locknuts-m3-din-985-steel-zinc-plated-100-pcs-812808)
-13 [Repair washer M5](https://www.boikon.com/webshop/fastenings/repair-washer-din-9021-zp-m5/)
-37 [Sliding block 5 - M4](https://www.boikon.com/webshop/fastenings/sliding-block-5-m4/)
-11 [Sliding block 5 - M5](https://www.boikon.com/webshop/fastenings/sliding-block-5-m5/)
- 2 [Threaded plate 5 - M4](https://www.boikon.com/webshop/fastenings/threaded-plate-5-m4/)
- 2 [Tripod screw 1/4”](https://www.caruba.com)
-37 [Washer M4](https://www.boikon.com/webshop/fastenings/washer-din-125-1a-zp-m4/)
-###Power: Sources and converters -The main system uses a 7s battery pack. The voltage drops as the battery discharges.
- 1 [Battery 400Wh, 25V, 7s5p](http://www.emergostar.com)
- 1 [DC/DC Converter 12vdc 60W Rail mounted, 18-75vdc](https://www.meanwell.com/webapp/product/search.aspx?prod=DDR-60)
- 2 [DDR-15G-5 DIN-rail](https://www.meanwell.com/webapp/product/search.aspx?prod=DDR-15)
- 1 [Victron charger 24V 2,5/5A](https://www.victronenergy.com/chargers/blue-smart-ip65-charger)
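The working voltage window of the 7s pack follows directly from the per-cell limits. A rough sketch, assuming typical Li-ion limits of 3,0 V (empty) and 4,2 V (full) per cell; these cell limits are assumptions, so check your pack's datasheet:

```python
CELLS_IN_SERIES = 7   # the "7s" in the 7s5p pack designation
CELL_EMPTY_V = 3.0    # assumed typical Li-ion cut-off per cell
CELL_FULL_V = 4.2     # assumed typical Li-ion full-charge voltage per cell

pack_empty = CELLS_IN_SERIES * CELL_EMPTY_V   # 21,0 V
pack_full = CELLS_IN_SERIES * CELL_FULL_V     # 29,4 V

# The DDR-60 converter input range (18-75 VDC) covers the whole window.
assert 18 <= pack_empty and pack_full <= 75
print(f"pack range: {pack_empty:.1f}-{pack_full:.1f} V")
```

This is why the converters in this list accept a wide input range rather than a fixed 24 V.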
-###Power: Connectors -Where possible, DIN-rail mounted connectors are used.
- 1 [Connector, RoPD 25A, C003-04-2000-C](https://www.rosenberger.com/en/products/automotive/ropd.php)
- 1 [Connector, RoPD 25A, C003-B1-500-C](https://www.rosenberger.com/en/products/automotive/ropd.php)
- 2 [Plug-in bridge - FBS 5-5](https://www.phoenixcontact.com/online/portal/us/?uri=pxc-oc-itemdetail:pid=3030190&library=usen&pcck=P-15-07&tab=1&selectedCategory=ALL)
- 2 [Plug-in bridge FBS 2-5](https://www.phoenixcontact.com/online/portal/nl/?uri=pxc-oc-itemdetail:pid=3030161&library=nlnl&pcck=P&tab=1&selectedCategory=ALL)
- 1 [PoE-injector passive](https://www.digitus.info/en/products/active-network-components/power-over-ethernet-poe/poe-injectors/dn-95002/)
- 2 [Terminal Block, 10 mm² Black, Phoenix Contact UT 6 3044131 2 0.2 mm² ](https://www.phoenixcontact.com/online/portal/us/?uri=pxc-oc-itemdetail:pid=3045208&library=usen&pcck=P-15-01-02-01&tab=1&selectedCategory=ALL)
- 1 [Terminal Block, 10 mm² Red, Phoenix Contact UT 6 RD 3044144 ](https://www.phoenixcontact.com/online/portal/us?uri=pxc-oc-itemdetail:pid=3045185&library=usen&tab=1)
- 7 [Terminal block, 2,5mm2, black, Phoenix Contact UT 2,5 bk ](https://www.phoenixcontact.com/online/portal/us/?uri=pxc-oc-itemdetail:pid=3045088&library=usen&pcck=P-15-01-02-01&tab=1&selectedCategory=ALL)
- 8 [Terminal block, 2,5mm2, red, Phoenix Contact UT 2,5 RD ](https://www.phoenixcontact.com/online/portal/us/?uri=pxc-oc-itemdetail:pid=3045062&library=usen&pcck=P-15-01-02-01&tab=1&selectedCategory=ALL)
-###Power: Circuit breakers -To activate and deactivate the equipment, several breakers and fuses are used.
- 1 [BACO 223963 L21NK00 Key Switch, 22 MM L21NK00 N/A](http://bacocontrols.com/wp-content/uploads/2018/09/Baco-Controls-22mm-Quick-Reference-Guide.pdf)
- 1 [BACO 33E01 Contact breaker momentary NC](http://bacocontrols.com/wp-content/uploads/2018/09/Baco-Controls-22mm-Quick-Reference-Guide.pdf)
- 1 [BACO 33E10 contact element NO](http://bacocontrols.com/wp-content/uploads/2018/09/Baco-Controls-22mm-Quick-Reference-Guide.pdf)
- 2 [BACO BA222968 333E Adapter Plate 3-way](http://bacocontrols.com/wp-content/uploads/2018/09/Baco-Controls-22mm-Quick-Reference-Guide.pdf)
- 1 [BACO BAL21AL10 L21AL10 Red](http://bacocontrols.com/wp-content/uploads/2018/09/Baco-Controls-22mm-Quick-Reference-Guide.pdf)
- 1 [Fuse 15A](http://www.mta.it/en/automotive-fuses-catalogue)
- 1 [Fuse 1A](http://www.mta.it/en/automotive-fuses-catalogue)
- 1 [Fuse 20A](http://www.mta.it/en/automotive-fuses-catalogue)
- 1 [Fuse 5A](http://www.mta.it/en/automotive-fuses-catalogue)
- 2 [Relay 5Volt USB NO / NC](https://www.conrad.com/p/conrad-components-393905-relay-card-component-5-v-dc-393905)
- 4 [WAGO 2006-1681 Fuse terminal 7.50 mm](https://www.wago.com/us/rail-chassis-terminal-blocks/2-conductor-fuse-terminal-block-for-automotive-blade-style-fuses/p/2006-1681)
-###Power: Wires and ferrules -To identify the different connections, different colors are used.
- 2 [Ferrule 0,34mm2 x 6 pink, micro USB to Pi](https://www.vogtshop.ch/index.cfm?content=productData&Language=1&TreeID=D9CAD446-200B-4BED-A9E0-8983A8022FFC&ObjId=6F4D89D7-5D65-4A1A-8E62-43D50D553427&sId=BAB90BCD-E0B2-4BF3-B775-41EC38DB71FB)
- 4 [Ferrule 0,50mm2 x 8 orange](https://www.conrad.com/p/tru-components-1091293-ferrule-050-mm-partially-insulated-orange-100-pcs-1571000)
- 4 [Ferrule 0,50mm2 x 8, white PoE, front cam](https://www.phoenixcontact.com/online/portal/de?uri=pxc-oc-itemdetail:pid=3200522&library=dede&tab=1)
-14 [Ferrule 0,75mm2 x 8 dark blue (RUT distribution, Nano)](https://www.vogtshop.ch/index.cfm?content=productData&Language=2&TreeID=FABA8DED-9F13-4A24-AE87-AA62FA29838B&ObjId=F9319843-0875-45B0-9D2C-E3C553AB971C&sId=029FAAF1-8DB7-415D-A8F1-A0B5FE3A07CA)
- 4 [Ferrule 0,75mm2 x 8 grey (Pi)](https://www.phoenixcontact.com/online/portal/de?uri=pxc-oc-itemdetail:pid=3200522&library=dede&tab=1)
- 2 [Ferrule 0.50mm2 * 8 twin Orange](https://www.conrad.com/p/conrad-components-1091317-twin-ferrule-050-mm-partially-insulated-orange-100-pcs-1091317)
- 2 [Ferrule 10 mm2 x 12 (battery, ESC), non isolated](https://www.conrad.nl/p/tru-components-1091264-adereindhulzen-10-mm-x-12-mm-ongeisoleerd-metaal-100-stuks-1570994)
- 2 [Ferrule 1mm2 x 8 Yellow (Pi+Switch)](https://www.vogtshop.ch/index.cfm?content=productData&Language=1&TreeID=66C89F38-3460-455D-858E-5A07CE30494C&ObjId=7D68ACA0-1E38-4B8C-8D16-5E461D337A89&sId=0BE48027-904B-4356-BFF1-2CCEF2FB516B)
- 6 [Ferrule 2,5mm2 x 8 grey(12vdc)](https://catalog.weidmueller.com/catalog/Start.do?ObjectID=9021070000)
- 8 [Ferrule 2,5mm2 x 8, blue(ESC) Phoenix Contact](https://www.phoenixcontact.com/online/portal/de?uri=pxc-oc-itemdetail:pid=3200522&library=dede&tab=1)
- 0,2 [FLRY-B 0.75 mm², black green(Switch)](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 1 [FLRY-B 0.75 mm², black white(Switch)](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 1 [FLRY-B 0.75 mm², black yellow(RUT-distribution)](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 2 [FLRY-B 0.75 mm², red green(Nano)](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 1 [FLRY-B 0.75 mm², red white(router, switch)](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 1,15 [FLRY-B 0.75 mm², red yellow(RUT-distribution)](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 2 [FLRY-B 0.75 mm², red(Pi)](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 1 [FLRY-B 1.50 mm², black(fase out)](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 1,2 [FLRY-B 2,50 mm², black](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 2,3 [FLRY-B 2,50 mm², red](https://d1619fmrcx9c43.cloudfront.net/fileadmin/automotive_cables/publications/catalogues/single-core_automotive_cables.pdf?1460983564)
- 0,05 [Heatshrink medium](https://www.conrad.com/p/tru-components-1225494-heatshrink-wo-adhesive-white-127-mm-shrinkage21-sold-per-metre-1572508)
- 0,05 [Heatshrink small](https://www.conrad.com/p/tru-components-1225468-heatshrink-wo-adhesive-yellow-450-mm-shrinkage21-sold-per-metre-1571044)
- 4 [Low power plug 5.50 mm 2.10 mm, wired](https://www.conrad.com/p/tru-components-low-power-cable-low-power-plug-sony-xperia-550-mm-210-mm-100-m-1-pcs-1715033)
- 1 [Power cable Raspberry Pi [1x USB-C plug - 1x Sony Xperia] 1.00 m Black](https://www.conrad.nl/p/tru-components-stroomkabel-raspberry-pi-1x-usb-c-stekker-1x-open-einde-100-m-zwart-2247651)
-###ICT components -A robot needs a lot of computing power.
- 1 [Connection set for Raspberry Pi Hat](https://wiki.seeedstudio.com/Grove_Base_Hat_for_Raspberry_Pi/)
- 1 [Cooling Fan for Jetson Nano](https://www.waveshare.com/fan-4010-5v.htm)
- 1 [Grove Base Hat for Raspberry Pi](https://wiki.seeedstudio.com/Grove_Base_Hat_for_Raspberry_Pi/)
- 1 [Nvidia Jetson Nano Developer Kit](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/)
- 1 [Raspberry Pi 4 B / 2GB](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/specifications/)
-###Communication and sensors -The robot stays in contact with the operators and needs lots of information from its surroundings.
- 2 [Micro SD 128 GB Sandisk](https://kb.sandisk.com/app/answers/detail/a_id/22697/~/microsd%2Fmicrosdhc%2Fmicrosdxc-card-support-information-page)
- 2 [PTZ Camera Hikvision ](https://www.hikvision.com/en/products/IP-Products/PTZ-Cameras/Value-Series/DS-2DE2A204IW-DE3/)
- 1 [RUT955 Teltonika](https://teltonika-networks.com/product/rut955/)
- 1 [SIM](https://en.wikipedia.org/wiki/SIM_card)
- 1 [Switch ethernet](https://www.wachendorff-prozesstechnik.de/ETHSW50K/)
-###Communication: Computer cables -Different colors are used for the Ethernet cables.
- 2 [Amphenol LTW 2610-0402-01 data cable Socket, right angle](https://www.amphenolltw.com/index.php)
- 2 [Amphenol Protective cap](https://www.amphenolltw.com/p7-search.php)
- 2 [Cable USB 2.0 A to USB B](https://www.techly.com/usb-2-0-cable-a-male-b-male-angled-0-5m.html)
- 1 [Ethernet 25 cm black front cam](https://www.conrad.com/p/basetech-rj45-network-cable-patch-cable-cat-5e-u-utp-25.00-cm-black-incl.-detent-1717495)
- 1 [Ethernet 25 cm green Nano](https://www.conrad.com/p/basetech-rj45-network-cable-patch-cable-cat-5e-u-utp-25.00-cm-green-incl.-detent-1717481)
- 1 [Ethernet 25 cm grey back cam](https://www.conrad.com/p/basetech-rj45-network-cable-patch-cable-cat-5e-uutp-2500-cm-grey-incl-detent-1717475)
- 2 [Ethernet 25 cm yellow router](https://www.conrad.com/p/basetech-rj45-network-cable-patch-cable-cat-5e-uutp-2500-cm-yellow-incl-detent-1717472)
- 1 [Ethernet 50 cm Blue Lan](https://www.conrad.com/p/basetech-rj45-network-cable-patch-cable-cat-5e-uutp-050-m-blue-incl-detent-1717505)
- 1 [Ethernet 50 cm red Pi](https://www.conrad.com/p/basetech-rj45-network-cable-patch-cable-cat-5e-u-utp-25.00-cm-red-incl.-detent-1717510)
- 1 [Ethernet 50 cm white Wan](https://www.conrad.com/p/basetech-rj45-network-cable-patch-cable-cat-5e-uutp-050-m-white-incl-detent-1717520)
- -You can put this all together. The parts are obvious to use. -If you have any question, don't hesitate to contact us at [www.mwlc.global/contact/](http://www.mwlc.global/contact/). \ No newline at end of file diff --git a/docs/mwlc_order.md b/docs/mwlc_order.md deleted file mode 100644 index 807abae3..00000000 --- a/docs/mwlc_order.md +++ /dev/null @@ -1,5 +0,0 @@ -## Order a robot - -TBD - -In the meantime please contact us at [www.mwlc.global/contact/](http://www.mwlc.global/contact/). \ No newline at end of file diff --git a/docs/operator_manual.md b/docs/operator_manual.md deleted file mode 100644 index d2081082..00000000 --- a/docs/operator_manual.md +++ /dev/null @@ -1,347 +0,0 @@ -# Controller and Browser - -## Quick start - -### Stop the robot by the steward - -Press the red button on the outside of the casing. - -![](img/autopilot/rover_front.jpg) - -On the left-side of the robot there is a red button that stops the engine when pressed. -The steward should stay left of the robot in reach distance of the button. - - -### Certificate -If you need a certificate: Download the certificate from web-server of the robot and set it to trusted on your computer. -Some more information is found under the paragraph: Get the certificate - -### The Robot -Turn the key to the vertical position with the white indicator pointing upwards. -Wait a few minutes until the robot's internal lights are on and you can hear the sound of fans. - -### Browser -On your computer open chrome and type in the robot's [ZeroTier managed IP](zerotier_manual.md) in the browser address bar. - -![](img/controller/ui_overview.jpg) - -On connection success the browser window should look like this (with a different view). -In the top section of the screen you can see status information. - -### Stop the robot by the operator - -Press the `east` button twice to stop the robot. 
- -![](img/controller/ps4_teleop.jpg) - -`east` is the rightmost button, red or with a red circle -### Controller in Teleoperation -Please use the triggers carefully, the robot may be more powerful than you expect. -- Go to teleoperation and start engine: Press the `east` button. -- Drive forward: Press carefully the `left trigger`. -- Steering: The `right joystick`. -- Go to reverse: press the `right trigger` totally down and release it. -- Drive backwards: If in reverse, press carefully the `right trigger`. -- Active engine braking in forward-mode: The `right trigger`. - -> The engine is turned off after a few minutes of inactivity, press `east` to turn it back on. - -### Controller in Autopilot -* Activate the autopilot: Press `north` -* Drive: After starting the autopilot the max speed is always zero, which also means the the robot doesn't move. To have the robot drive the maximum speed should be above zero. -Press `up arrow` to increase the maximum speed and have the robot driving. -* Press `down arrow` to reduce the maximum speed or come to a stop - ---- - -## Controller -### Supported controllers -Chrome has been tested with recent xbox controllers, both wired and wireless, and recent wireless ps4 controllers. -These type of controllers do more or less the same but use different button labels and graphics. - -> Other browser/controller combinations may seem to work on first inspection - they get the 'standard' mapping in the html5 gamepad api - -> but be aware of unforeseen consequences due to ongoing developments in the api and browser stacks. - -![](img/controller/xbox_white.png) - -The controller has: -* Two joysticks on the top face: One on the left and one on the right -* Two groups of four buttons on the top face: In this manual we name them: `left-, right-, up-, and down-arrow`
 - The buttons in the right group are named: `north, south, east and west` -* On the front of the controller there two buttons at each side. They are named `left button` and `right button` - The bottom triggers are called `left trigger` and `right trigger` - -Control and acceleration values are sent from the controller via the computer to the robot. -In addition the controller is used to send instructions to the autopilot. -The controller has two modes, teleoperation and autopilot. - -![](img/controller/ps4_overview.jpg) - - -### Stop -Press `east` twice: -* if the controller or computer is in sleep mode; wake them up, go to teleoperation mode and stop -* if the robot's engine is in sleep mode, start it, don’t drive the robot -* if the robot is in autopilot mode; go to teleoperation mode and stop - -![](img/controller/ps4_teleop.jpg) - -> Warning: if connections are lost, the robot will not respond. If you have physical access to the robot, press the red button, use the key -> or simply move the robot aside by force. - - -## The Browser -The browser has to support the controller and Java-script. -Voor testing Google Chrome for MacOS is used. -### Teleoperation -#### Warnings -If applicable in the top of the window a warning will appear. There are several warnings possible. -![](img/controller/msg_controller_not_detected.png) -Controller not detected. Try to press `east` to get it online. - - -![](img/controller/msg_another_user_in_control.png) -Another user in control. This can be more difficult to solve; there might be another user in control or perhaps another browser, window or tab on the same computer is connected. - - -![](img/controller/msg_connection_lost.png) - -Connection lost. This can have many reasons; please refer to the manual [Connections and ZeroTier](zerotier_manual.md) which includes a section on problem solving. - -#### Options - -![](img/controller/ui_menu_options.png) - -* Click the “hamburger” icon (leftmost) to:
 - - Stop the data stream from the camera to your computer - - Go to the settings screen. To return to the main screen use the browser's ‘Back’ button. - - The settings itself are only used for development purposes. Check github for more information. -* h264: Stream from the camera based on system settings. -* mjpeg: Stream from the camera with manual compression control. Use the up- and down arrows to change the image quality in order to - minimize latency. - -> These settings can be used for teleoperation and do not affect the autopilot. -> In bad connection circumstances it can be necessary to manually set low image quality just to obtain acceptable latencies. - -####Backcamera -Right of the options there is the small window with the view from the back-camera. -Click on it to make a larger window visible above the large window of the front-camera. -Click on it again to hide the large window. - -####Feedback - -![](img/controller/teleop_feedback.jpg) -Centered in the top of the window is the status about the mode and the speed. - -![](img/controller/wheel_black.png) -![](img/controller/wheel_red.png) -Beside the speed you find the feedback of the position of the wheels via a steering wheel icon. -The steering wheel rotates on screen: -* Indicating the current steering value from the controller
 -* When grey: The robot recognises its driving corridor but the robot reacts only on operator input as a result of teleoperation mode -* When red: The robot does not recognise its driving corridor - -> A grey steering wheel indicates the autopilot is ready to drive at the moment. - -### Autopilot - -#### Warnings and options -See section teleoperation - -#### Input status - -![](img/controller/dash_center_autopilot.png) - -'AUTO': Autopilot mode - '10': Autopilot is set at maximum speed 10 km/h. - -#### Feedback status -The steering wheel rotates on screen indicating the current steering value from the autopilot. -The speed value displayed is the maximum set value minus several autopilot estimates of the current driving situation -![](img/controller/wheel_blue_speed.png) -When blue, the robot recognises its corridor and that is has free passage. -![](img/controller/wheel_red_speed.png) -When red there are several options: -- The robot does not recognise its corridor. -- The robot is in a position and orientation that fits in more corridors with out a direction from a route.(see also training autopilot) -- There is not a free passage. -- A combination. - -## Camera handling -The camera has a pan and tilt function which can be operated using the controller. - -![](img/controller/ps4_camera.jpg) - -* `Left joystick`: Move to change the camera's pan and tilt positions -* `South`: Move the camera to its preset home position -* `Left arrow` and `Right arrow` : Switch between the camera's.(Show large window of back-camera via option in top) -* `Left button`: Make picture with front camera and store it in the month-folder in /photos/cam0. -### Camera home setting -The cameras have a home setting. This can be changed. -Warning: For the front-camera, the home setting should be the position that makes the AI-models work. -> A slight mis-alignment may give unexpected behaviour of the robot. 
- -For changing the home setting: -- choose the camera by pressing the the right or left arrow. -- Set the camera in the new home-position with the left stick. -- Save the new home position by pressing the X-button(blue) 10 seconds, then keep this button pressed and pres three times the A-button(green). -- Test the new home positon by turning the camera with the left stick, press the A-button and see if the camera turn back to the new home-position. - - - -## Details on Teleoperation mode -Teleoperation has two sub modes: -* Forward -* Reverse - -![](img/controller/ps4_forward_backward.jpg) - -> There is no visual on screen indication of the sub mode. - -### Button functions for driving -* `East`
 - - if the engine it not running, start it. - - if moving, stop. -* `Left trigger` - - when pressed and stopped or moving forward, accelerate until maximum power.
 - - when pressed and moving backwards, engine braking.
 - - when released and moving forwards, roll to stop. -* `Right trigger` - - When pressed and moving forwards, engine braking - - When pressed and moving backwards, accelerate backwards until maximum power.
 - - When pressed fully in sub-mode forwards and not moving, nothing happens but switching to sub mode reverse.
 - - When pressed in sub-mode backwards and not moving, accelerate backwards until maximum power.
 - - When released
and moving backwards, roll to a stop. - - When released and moving forwards, roll to a stop. -* `Right joystick`
 - - move left/right, turn wheels.
 - - move up/down, not in use. - - -## Details on Autopilot mode -The robot can only drive if it recognises one, and only one, corridor.
-If the robot recognises exactly one corridor, then a maximum speed above zero is also needed for it to drive.
 -The autopilot chooses its speed based on the speed during training in combination with certainty of recognition of the -corridor and it never exceeds the maximum set speed. - -### Interventions - -The autopilot can be overridden by commands from the operator: -* Increase or decrease speed and steering.(If speed is overruled by the operator the autopilot releases the steering also.) -* Just steering commands - -> In autopilot mode it is not possible to drive backwards. - -### Button Functions - -#### Instructions - -![](img/controller/ps4_autopilot.jpg) - -* `East` - - wake up the controller and start the engine - - switch to teleoperation mode
 - - if the computer and/or controller are in standby, wake-up
 - - in other situations, set the max speed to zero, go to teleoperation mode -* `North`. When in teleoperation, press to: - - stop, set max speed to zero
 - - switch autopilot on
 - - show autopilot mode on screen
 - - without instruction, the autopilot does what it was taught to do at that location 'As learned' - - with instruction, show it on screen. -* `Arrow Up`. Press once briefly to:
 - - increase the maximum speed by 0,2 km/h
 - - show the max speed on screen 
 - Press and hold:
 - - the maximum speed increases by 0,2 km/h per 0,5 sec of pressing the button down
 - - show the max speed on screen -* `Arrow Down`. Press once briefly to: - - decrease the maximum speed by 0,2 km/h
 - - Show the max speed on screen
 - - Press and hold to
 - - decrease the maximum speed by 0,2 km/h per 0,5 sec of pressing the button down - - show the max speed on screen - -> Note that the autopilot currently supports the 'As learned' instruction and not the others. - -#### Operator interventions overrule the autopilot -Intervening in the instructions from the autopilot to the engines, without stopping, means also training the autopilot. -To avoid unwanted actions from the robot, delete the trainingsdata from the storage on the computer. - -![](img/controller/ps4_autopilot_overrule.jpg) - -* `Left trigger` - - increase speed up to the set autopilot max speed - - override steering instructions from autopilo(operator should steer). - - record images, instructions and values including but not limited to steering and throttle, for training purposes - Release to:
 - - leave the speed for the autopilot to determine
and stop recording -* `Right joystick`
 - - steer - - override speed instructions from autopilo(operator should give speed). - - record images, instructions and values including but not limited to steering and throttle, for training purposes - - up/down movements are ignored -* `East` - - Stop - -## Certificates -The W3C-community decide recently that Game-pads are only available within the secure-context(https). -The implementation off this decision is not very strict. Some browsers ask a certificate, others not. -There are no official certificates for private IP-no as used within the ZeroTier VPN's. -If your browser demand a certificate then tis is possible. -MWLC made self-signed certificates available via the webserver on the robot. -To use the certificate you have: -- Get the certificate -- Install and trust it - -###Get the certificate -Assure your Robot is available in the ZeroTier network. -With Firefox, in the Mac-environment, the following steps are taken: -Click the HTTPS-lock-icon before the URL. - -![](img/controller/cert_20lockiconclicked.jpg) - -Click the details-arrow behind 'Information not secure'. - -![](img/controller/cert_25connectionnotsecureddetailsclicked.jpg) - -Click 'More information'. - -![](img/controller/cert_30moreinformationclicked.jpg) - - -Click 'View certificate' and scroll down. - -![](img/controller/cert_35viewcertificateclicked.jpg) - - -Click 'PEM (cert)' to download the file and save it to your disk and remember the location. - -The certificate is now downloaded to your computer - -###Install and trust the certificate - -Go to the certificate and open it. - -![](img/controller/cert_40downloadcertificateclicked.jpg) - -The mac will ask to add the certificate to the computer for further usage. -Choose as keychain 'login' and click add. -It's installed. - -Go on with trusting it. -Go to your key-chain and double-click the certificate. The certificate is called 'rover.mwlc.ai'. 
- - -![](img/controller/cert_50certificateopeninkeychain.jpg) - - -Open the 'Trust' section. - -![](img/controller/cert_55trustopen.jpg) - -Choose 'Always trust'. -If requested, confirm this with your password. - - Go to the robot with Chrome and check if your controller works. - diff --git a/docs/service_architecture.md b/docs/service_architecture.md deleted file mode 100644 index fbcc813e..00000000 --- a/docs/service_architecture.md +++ /dev/null @@ -1,320 +0,0 @@ -# Service Architecture - -## Smart Segment -A smart segment is the part of the robot that houses the computer systems that allow it to move autonomously. Connected segments make the robot act like a caterpillar. -This computer system includes a Raspberry Pi, a Jetson Nano, 2 cameras, a router and 2 motor controllers. - -## Hardware - -### AI Camera (camera0) - -- A smart segment uses a Hikvision PTZ Dome Network camera as its AI Camera. - -- IP: 192.168.1.64 - -- **Input**: Footage from its surroundings - -- **Output**: Sends H264 encoded video output to the Pi's Docker container service called "Stream0". - -### Operator Camera (camera1) - -- The smart segment also uses a 2nd Hikvision PTZ Dome Network camera as its Operator Camera. - -- IP: 192.168.1.65 - -- **Input**: Footage from its surroundings - -- **Output**: Sends H264 encoded video output to the Pi's Docker container service called "Stream1". - -### Raspberry Pi 4B - -- OS: balena-cloud-byodr-pi-raspberrypi4-64-2.99.27-v14.0.8 -- IP: 192.168.1.32 -- This OS allows it to communicate with Balena Cloud. Inside the Pi, there are 5 processes running, 4 of which run in their own separate Docker containers. - -### Nvidia Jetson Nano - -- OS: balena-cloud-byodr-nano-jetson-nano-2.88.4+rev1-v12.11.0 -- IP: 192.168.1.100 -- This OS allows it to communicate with Balena Cloud. Inside the Nano, there are 10 processes running, all of which run in their own separate Docker containers.
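The fixed addresses listed above make a quick reachability check easy to script. A minimal sketch using a plain TCP port probe; port 554 is the conventional RTSP port, while the SSH and web-UI ports for the Pi, Nano and router are assumptions for illustration:

```python
import socket

# Fixed addresses inside a smart segment (from the hardware list above).
# The ports other than 554 (RTSP) are assumptions, not confirmed by the docs.
DEVICES = {
    "camera0 (AI)": ("192.168.1.64", 554),        # RTSP
    "camera1 (operator)": ("192.168.1.65", 554),  # RTSP
    "raspberry-pi": ("192.168.1.32", 22),         # SSH (assumed)
    "jetson-nano": ("192.168.1.100", 22),         # SSH (assumed)
    "router (RUT955)": ("192.168.1.1", 80),       # web UI (assumed)
}

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in DEVICES.items():
        state = "up" if port_open(host, port) else "unreachable"
        print(f"{name:20s} {host}:{port} {state}")
```

Run from a computer on the segment's internal network (or the VPN) to see at a glance which devices answer.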
- -### RUT-955 - -- IP: 192.168.1.1 -- The router inside the segment is the RUT955 from Teltonika. The router has LAN, WAN, 4G, 5G and LTE capabilities. Its Ethernet connectivity is extended with a switch. The router is responsible for all internet connectivity between the segment and the rest of the Internet. -- This router also includes an internal relay that works as a switch that lets the battery power the rest of the segment. Only when the router has booted up and the relay switch closes will the segment receive power to the rest of its internal components. - -### Motor Controller 1 - -- The segment uses the Mini FSESC6.7 from Flipsky. It is connected via USB to the ttyACM0 serial port of the Pi. -- **Input**: Commands from the Pi. -- **Output**: Sends power to its respective motor wheel in order to turn it according to its commands. - -### Motor Controller 2 - -- The segment uses the Mini FSESC6.7 from Flipsky. It is connected via USB to the ttyACM1 serial port of the Pi. -- **Input**: Commands from the Pi. -- **Output**: Sends power to its respective motor wheel in order to turn it according to its commands. - -## Software stack - -1. #### Balena - -From Balena, we use the Balena Cloud services, and we also run BalenaOS on the Raspberry Pi and Jetson Nano to make them compatible with Balena Cloud. From Balena Cloud we can upload new versions of software, update segments OTA, reboot, connect via SSH, and manage segments remotely. - -2. #### Docker - -Docker is a platform for building, shipping, and running applications in containers. Containers are lightweight, portable, and self-sufficient units that contain all the necessary software, libraries, and dependencies to run an application. Docker enables developers to package their applications into containers, which can be easily deployed and run on any platform that supports Docker.
With Docker, developers can ensure that their applications run consistently across different environments, from development to production. - -The BYODR project includes Dockerfiles that can be used to build a Docker image for each service, as well as instructions on how to deploy the images onto a robot using Balena Cloud. With this approach, users can ensure that the software stack is consistent and reproducible across multiple robots, and can easily deploy updates and manage their fleet of robots from a central location. - -3. #### Zerotier - -ZeroTier is a "freemium" P2P (peer-to-peer) VPN service that allows devices with internet capabilities to securely connect to P2P virtual software-defined networks. - -The Pi has a ZeroTier instance running inside it. This means that it can work with a ZeroTier client running on our devices, so that we can add the Pi to our VPN network. - -The Nano has the same ZeroTier functionality as the Pi, although it is arguably more important here, since it allows the user to connect to the Nano, and by extension the web server, via a secure ZeroTier network. - -4. #### Wireguard - -Similarly to ZeroTier, WireGuard is also a VPN. The difference is that WireGuard is used by the Nano in every network procedure it has to go through, for safety. Since the Nano has many more processes that require a network connection than the Pi, WireGuard is an extra layer of security against attacks. This process runs inside a Docker container. - -**Q1**: Why do we use WireGuard if we have ZT? -**A1**: Although ZeroTier and WireGuard look similar, the project uses them for different purposes. ZeroTier is used to create a secure network connection between the robot and the user's computer, while WireGuard is used to encrypt the data that is transmitted over that connection.
Together, these technologies provide a secure and reliable way for users to remotely control the robots. - -# The CPUs - -This section explains each CPU, how it works, and the services running inside it. - -## Raspberry Pi docker services: - -### Stream0 - -**Input**: Receives the video stream from the AI camera. - -**Function**: Creates a high-quality H264 video output stream. - -**Output**: Sends the stream via RTSP to the web server located in Teleop. - -**Q1**: Why does the Pi create the streams, instead of sending them from the cameras directly to the Nano, bypassing the Pi? - - - -### Stream1 - -**Input**: Receives the video stream from the Operator camera. - -**Purpose**: Similarly to the process above, it creates a high-quality H264 video output stream. - -**Output**: Sends the stream via RTSP to the web server located in Teleop. - -**Q1**: How does the AI get the images for itself? From the H264 stream, or someplace else? - -### Zerotier - -**Input**: Receives input from the user, using the built-in command line. - -**Function**: We can add the Pi to our VPN network. - -**Output**: The Pi can communicate with the software-defined virtual networks that the user has built, via the internet. - -**Q1**: Why does the Pi need ZeroTier? - -### Servos - -**Input**: Receives commands in JSON format from Teleop, Inference and Pilot that request movement from the motors. - -**Function**: Sets up a JSON server that listens on `0.0.0.0:5555` for commands from other processes. Listening on `0.0.0.0` means accepting connections from anywhere that has network access to this device. It also sets up a JSON publisher so that this service can send JSON data to any services that are listening to it. Commands received from the other services are decoded and stored in a deque. - -This service also initiates an HTTP server listening on port `9101` (default option). - -**Output**: The commands are sent to the motor controllers via the serial USB connection.
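The Servos pattern described above — a listener that decodes incoming JSON commands into a deque for the motor loop — can be sketched with the Python standard library. The transport and framing here (newline-delimited JSON over TCP) are assumptions for illustration; the real service has its own wire protocol:

```python
import json
import socketserver
from collections import deque

# Decoded commands, newest last; a motor loop would consume from this.
commands = deque(maxlen=100)

class JSONCommandHandler(socketserver.StreamRequestHandler):
    """Decode newline-delimited JSON commands and store them in the deque."""

    def handle(self):
        for line in self.rfile:
            try:
                commands.append(json.loads(line))
            except ValueError:
                pass  # ignore malformed messages

def serve(host="0.0.0.0", port=5555):
    # Binding to 0.0.0.0 accepts commands from any service that can reach this device.
    with socketserver.ThreadingTCPServer((host, port), JSONCommandHandler) as server:
        server.serve_forever()
```

In the real service, the queued commands would then be written to the motor controllers over the serial USB connection.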
 - -**Q1**: BalenaCloud lists another service called "pigpiod". This service and the "servos" service both use the same image in their docker container. What does the pigpiod service do? - -**Q2**: Why does this service have a JSON publisher? Who does it send data to? - -### Battery Management System (BMS) [The only service that does not run in a docker container] - -**Input**: Receives data from the BMS inside the battery itself. - -**Function**: The Pi uses an I2C Pi Hat to communicate with the special battery that the segment uses. From here, the Pi can give a "pulse" to the battery in order to "reset" it. This system also allows for seamless use of 2 or more of the same battery on the same segment. This process is exclusively hardware based, so it does not run in a container. - -**Output**: Sends data to the BMS inside the battery. - -## Jetson Nano docker services: - -### HTTPD - -**Input**: Listens for data requests from Teleop, Pilot, Stream1 and Stream0. The sources are listed in the configuration file (called haproxy.conf) that the proxy server uses. - -**Function**: This service sets up a proxy server (an intermediate server between the client and the actual HTTP server) using HAProxy, for load balancing and request forwarding. - -**Output**: Forwards requests to the same services as above, taking load balancing into account. The destinations are listed in the configuration files that the proxy server uses. - -### Inference - -**Input 1**: Receives the stream from the AI camera, with the socket url being ``'ipc:///byodr/camera_0.sock'`` -**Input 2**: Receives routes from Teleop, with the socket url being ``'ipc:///byodr/teleop.sock'`` -**Input 3**: Receives timestamps from Teleop, with the socket url being ``'ipc:///byodr/teleop_c.sock'`` - -**Function**: This service provides an interface for generating steering angles and making predictions based on images. These actions are based on a trained neural network model.
If this service has input from Teleop, it overrides the self-driving directions of the model. -This service also initiates an IPC server with url ``'ipc:///byodr/inference_c.sock'``, and a JSON publisher with url ``'ipc:///byodr/inference.sock'`` - -**Output 1**: Sends the AI model's current state (predicted action, obstacle avoidance, penalties, and other navigation-related information) to the local JSON publisher. -**Output 2**: Creates a list of any errors from this service. The list is sent to the Teleop service upon request. -**Q1**: How do Inference, Pilot and Teleop work together, if they work together? -**Q2**: How does Inference send its data to the Pi for proper movement? - -### Zerotier - -**Input**: Receives input from the user, using the built-in command line. - -**Function**: The Nano can be added to a virtual network. - -**Output**: Can communicate securely with nodes of the same network. - -### WireGuard (Not used) - -**Input**: Receives data from the Nano and the Router. - -**Function**: Encrypts the data of the Nano. - -**Output**: The data sent by the Nano towards the internet is encrypted. - -**Q**: Why do we use WireGuard if we have ZT? - -**A**: Although ZeroTier and WireGuard look similar, the project uses them for different purposes. ZeroTier is used to create a secure network connection between the robot and the user's computer, while WireGuard is used to encrypt the data that is transmitted over that connection. Together, these technologies provide a secure and reliable way for users to remotely control the robots. - -### Teleop - -**Input 1**: Receives the stream from the Stream0 (Camera0 - AI) service of the Pi via `'ipc:///byodr/camera_0.sock'`. - -**Input 2**: Receives the stream from the Stream1 (Camera1 - Operation) service of the Pi via `'ipc:///byodr/camera_1.sock'`. - -**Input 3**: Receives data in JSON format from the Pilot service that includes the proper action for the segment to take, depending on various factors.
Potential actions are: - -1. Switch to manual driving mode -2. Continue with the current manual mode commands that are being executed -3. Continue with the current autopilot mode commands that are being executed -4. Continue with the current backend (CARLA simulation) mode commands that are being executed - -Note: these commands may be old, repeated or empty commands (do nothing). - -**Input 4**: Receives the current state (location, speed, timestamp and more) of the segment that is located in the CARLA simulation, in JSON format from the Vehicle service. - -**Input 5**: Receives the AI model's current state in JSON format from the Inference service's Output 1. - -**Input 6**: Receives input from the operator's method of control. - -**Input 7**: Receives lists of errors and/or capabilities from the following services via their respective sockets: - -- Pilot service via `ipc:///byodr/pilot_c.sock` -- Inference service via `ipc:///byodr/inference_c.sock` -- Vehicle service via `ipc:///byodr/vehicle_c.sock` - -**Function**: This service includes a web server that listens for inputs from multiple sources that are later used to move the robot. The key presses from the operator are registered and reflected on the robot using this service. -This service includes a logger that logs information regarding the manual control of the robot. -In addition, there is a function in this server that encodes the streams from the cameras to MJPEG. -It also hosts the site design files necessary to draw the Web App. - -**Output 1**: Robot movement according to the user's commands -**Output 2**: Live video feed on the web app -**Output 3**: MJPEG stream capability -**Output 4**: Logs and messages produced during operation are stored in MongoDB. -**Output 5**: Publishes the segment's selected route on its JSON publisher. -**Output 6**: Publishes a timestamp with a 'restart' command on its JSON publisher. - -**Q1**: How does "Teleop" translate user input into robot movement?
-**Q2**: How does it communicate with the cameras, "Pilot", "Inference" and "Vehicle"? -**Q3**: What data does it receive from and send to the Pilot, Vehicle and Inference services? -**Q4**: From where does it receive its navigation images? -**Q5**: What does it do with the images? - -### Vehicle - -**Input 1**: Receives a JSON from the Pilot service that includes the proper action for the segment to take, depending on various factors. Potential actions are: - -- Switch to manual driving mode -- Continue with the current manual mode commands that are being executed -- Continue with the current autopilot mode commands that are being executed -- Continue with the current backend (CARLA simulation) mode commands that are being executed - -**Input 2**: Receives the timestamp with a 'restart' command from Output 6 of Teleop. -**Input 3**: Receives a route from Teleop's Output 5. - -**Function**: This process sets up a server that connects to a CARLA simulator and communicates with it to control a self-driving car. CARLA is an open-source simulator for autonomous driving research. It is used to simulate the robot's behavior in a virtual environment. -**Output 1**: Publishes the current state (location, speed, timestamp and more) of the segment that is located in the CARLA simulation, on the JSON publisher. -**Output 2**: Creates a list of any errors and capabilities of the simulated segment. The list is sent to the Teleop service upon request. - -**Q1**: Is this process exclusively used to send data to the CARLA simulator, and nothing else regarding the driving of the robot? -**Q2**: Where is the CARLA simulation hosted? In the headless high-power PC. -**Q3**: What do the video streams created in the server do exactly?
-This is meant as an abstraction to run byodr on different robot 'platforms'; we also had different hardware platforms in the past, such as the EXR1 tank robot or, before that, the Mule. - - -### ROS Node - -**Input 1**: Receives a JSON from the Pilot service that includes the proper action for the segment to take, depending on various factors. Potential actions are: - -- Switch to manual driving mode -- Continue with the current manual mode commands that are being executed -- Continue with the current autopilot mode commands that are being executed -- Continue with the current backend (CARLA simulation) mode commands that are being executed - -**Input 2**: Receives the timestamp with a 'restart' command from Output 6 of Teleop. -**Function**: This service defines a ROS2 node which connects to a teleop node and a pilot node, and switches the driving mode to Autopilot or Manual, depending on user input. It also sets a max speed for the segment. -**Output**: Sends ROS commands in JSON format to the Pilot service. -**Q1**: Why exactly do we need this service? -**Q2**: Does the communication with other services imply the existence of multiple nodes? -**Q3**: Why does it publish JSON data only to the Pilot, and not to both Pilot and Teleop? -**Q4**: What is "m" and what does it include? - -### Pilot - -**Input 1**: Receives a route from Teleop's Output 5. -**Input 2**: Receives "m" from the ROS node's Output 1. -**Input 3**: Receives the current state (location, speed, timestamp and more) of the segment that is located in the CARLA simulation from Vehicle's Output 1. -**Input 4**: Receives the AI model's current state (predicted action, obstacle avoidance, penalties, and other navigation-related information) from Inference's Output 1. -**Input 5**: Receives the timestamp with a 'restart' command from Output 6 of Teleop. -**Input 6**: Receives user input (????????????)
 - -**Function**: This process sets up a JSON publisher and a local IPC server to send data to other services that have JSON collectors. It is also responsible for controlling the segment's autonomous movement by using a pre-trained AI model. -**Output 1**: Publishes a JSON on its JSON publisher that includes the proper action for the segment to take, depending on various factors. Potential actions are: - -- Switch to manual driving mode -- Continue with the current manual mode commands that are being executed -- Continue with the current autopilot mode commands that are being executed -- Continue with the current backend (CARLA simulation) mode commands that are being executed - -**Output 2**: Creates a list of any errors from this service. The list is sent to the Teleop service upon request. -**Output 3**: Movement commands to the Pi. (???) - -**Q1**: Is this process exclusively used by the robot for its autonomous driving? -**Q2**: How does this service cooperate with "Inference"? -**Q3**: Why does this service start an HTTP server? -**Q4**: What is an IPC chatter JSON receiver? -**Q5**: What is a local IPC server? (It uses _c in its name => c = chatter?) -**Q6**: How does this service send movement commands to the Pi? (If it does) - -### MongoDB - -**Input**: Receives data from the segment, and stores any and all logs produced by the other services (?) -**Function**: This service creates a default MongoDB user and starts a configurable MongoDB server on the local machine. -**Output**: Stores logs in its built-in database. -**Q1**: What does the DB store inside it? -**Q2**: How does the DB get the data that it stores? - -### FTPD - -**Input**: Receives the newly trained model from the training server. -**Function**: This service creates a Pure-FTPd server with a predefined set of commands. This server is used to send training data to the training server and, similarly, to receive the trained model from it.
-**Output**: Sends data to the AI training server with parameters for its specific training. -**Q1**: Is this the code that connects the Nano to the FileZilla FTP server? (Mentioned in the Read the Docs documentation) -**Q2**: How does this FTP server store, send and receive data from the training server? - -# General questions - -**Q1**: How do JSON receivers and publishers work? They are used to send data from one service to another. -**Q2**: If all segments are in a ZeroTier network, is the data sent between the segments encrypted? \ No newline at end of file diff --git a/docs/startup_manual.md b/docs/startup_manual.md deleted file mode 100644 index a1bfc5f3..00000000 --- a/docs/startup_manual.md +++ /dev/null @@ -1,199 +0,0 @@ -# Startup and maintenance - -## Summary - -The robot is accessible via a ZeroTier Virtual Private Network (VPN) which is set up to accommodate one or more robots managed by -the VPN-manager. -We set up a ZeroTier VPN with the robot as a node. The VPN-manager has a mwlc.global account. Please refer to your credentials document. -The robot can be driven by one operator at a time, not more. - -### Preparation -Members of a ZeroTier network are referred to as ‘nodes’. Unless otherwise configured, any authorised node can access any other -authorised node in the same network. - -#### ZeroTier client -Install the [ZeroTier client](https://www.zerotier.com/download/) on your computer. -Check your NodeID. This is the ID assigned to your computer. The NodeID will be used in any networks joined. -Join the network with the ID in the credentials. - -#### ZeroTier management -Go to [my.zerotier.com](https://my.zerotier.com). -Log in with the credentials you received from MWLC. -Go to `Networks`. -Authorise your computer on the network. You can identify your computer using its NodeID from the preparation step. -When successful, your computer will receive a managed ip. -Note the managed ip of the robot in the network. This is the ip to access the robot at.
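The ZeroTier client steps above can be sketched as a short command sequence (assuming the ZeroTier client is installed; the network ID shown is a placeholder — use the one from your credentials document):

```shell
# Placeholder network ID - replace it with the ID from your credentials document.
NWID=8056c2e21c000001

sudo zerotier-cli info            # shows this computer's NodeID and online status
sudo zerotier-cli join "$NWID"    # join the network; the VPN-manager must still authorise this node
sudo zerotier-cli listnetworks    # once authorised, the managed ip appears in this listing
```

Until the VPN-manager authorises the node in my.zerotier.com, `listnetworks` will show the network without a managed ip.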
- -### Chrome -On your computer, install the Google Chrome browser. - - -### Start -Unpack the robot. -Turn the key and wait several minutes for the startup procedure of the different devices to complete. -Plug the controller into your computer. -Open Chrome and browse to the managed ip of the robot as shown in the my.zerotier.com *network* information. -As an example, assume the robot’s managed ip is 192.168.193.42. In the browser’s address bar type: - - http://192.168.193.42 - -### Store -1. Place the robot on a platform so that the wheels can turn freely. -1. Plug the charging cable into the charging socket and an internet connection into the WAN-port. -1. For support, it is desirable to place the robot in front of a mirror. - ---- - -## Introduction -You have or will receive a robot from MWLC. To get it operational, you have to make arrangements: - -* Check the contents of the package -* Check the emailed documentation -* Study the documentation -* Install software -* Prepare the robot - -The robots from MWLC can be managed and/or operated via the internet. A robot runs webservices that can be accessed via a browser. -To ensure security, the robot is only accessible via a VPN. -Familiarise yourself with the documentation from ZeroTier. - -## Pack-list -Your package should include: - -* 1 robot -* 2 keys with the number mentioned in your credentials list -* 1 Victron charger with a Rosenberger connector -* 1 PS4 controller -* 1 USB cable - -##### Lifting -The robot should weigh a little less than 25kg. -It should be possible to lift the robot by the longitudinal beam at the top. Always check that the bolts are tight enough.
-### E-mailed information -In your email you should have: - -* A reference to these pages -* A credentials list - - -### Credentials -In your credentials you should have: - -* RobotID -* 1 ZeroTier login email/ID -* 1 ZeroTier password -* 1 ZeroTier network ID -* The identification number of the ignition keys - -Please check that the information is complete. - - -## Preparation -### Once - -#### In advance -Send your e-mail address to MWLC. -You will receive this instruction from MWLC. -Install the necessary software on your computer. - -1. ZeroTier client application from: [www.zerotier.com/download](https://www.zerotier.com/download/) -1. Google Chrome - -#### Received together with the Robot -Credentials - - -#### Connect your computer to the VPN -Join your computer to the - MWLC provided - ZeroTier network ID with the option ‘Allow managed’. -Note the NodeID of your computer and check in the VPN members section in [my.zerotier.com](https://my.zerotier.com) -whether the correct computer is authorised. - - -#### Log in to network management -* Go to [my.zerotier.com/login](https://my.zerotier.com/login) -* Log in with the email address from the credentials -* Authenticate with the password from the credentials -* Choose menu option `Networks` -* Go to the network with the networkID from the credentials -* Check the joined computers by nodeID. -* Authorise your computer as a member. - -See also the manual [‘Connections and ZeroTier’](zerotier_manual.md). - - - -#### Per robot - -![](img/startup/managed_ip.png) - -> Take note of the managed ip of the robot. - - -#### Information about usage -See the manual [controller and browser](operator_manual.md). - - -### Per session - -#### Guard -Don’t drive the robot unattended. - -#### On and off -The robot is switched on and off with the ignition key. - - -#### Start Robot -* Take one of the ignition keys. Put it in the lock and turn it up. -* A blue LED will light up. This is the 12 VDC circuit.
-* After a while some small LEDs on the router will light up, and after one or - two minutes more LEDs will light up. You will hear a soft rattling sound from the cameras. -* Press `East` (see the manual [controller and browser](operator_manual.md)), and the main engines will start with the sound of fans. -* Ask the guard to hit the stop button. The main motor should stop. - - -#### Prepare operator seat -Connect the controller to the computer via Bluetooth or USB cable. -Start Google Chrome. - -#### Start operation -Browse to the ip of the robot. -See the manual [‘Controller and Browser’](operator_manual.md) for further instructions. - -#### Start training -See the manual [‘Training Autopilot’](autopilot_manual.md) for further instructions. - - -## Maintenance - -#### Store -When the robot is in use, the system will try to update software via the cloud. If functionality is changed, a release memo -and updated manuals will be provided. -To ensure the updates are possible: -1. Place the robot on a platform so that the wheels can turn freely. -1. Plug the charging cable into the charging socket. -1. Plug an internet connection into the WAN-port. - - -#### Charge -The robot can be charged with a Victron Blue Smart 24V/5A charger. -See [www.victronenergy.com/chargers/blue-smart-ip65-charger](https://www.victronenergy.com/chargers/blue-smart-ip65-charger). -The charger comes with an app with all relevant information. -At the back of the robot you will find a magnetic Rosenberger connector. The charge connector fits in one orientation only. - -#### Clean -It's important to keep the transparent casing of the cameras clean. -For maintenance of the chassis please follow the guidelines of Arrma. - -## Troubleshooting - -The robot is built with generally available parts. You can assemble it yourself, but some companies prefer an -[assembled version](mwlc_order.md). -For more information contact MWLC for a parts list and [assembly instructions](mwlc_kit.md).
 - -#### Main engine does not start but all lights are on. -Please check the fuse for the chassis. - -#### Main engine does not stop after hitting the red button. -This is dangerous; do not use the robot. Contact us. - -### Connection issues -See the manual [‘Connections and ZeroTier’](zerotier_manual.md). diff --git a/docs/zerotier_manual.md b/docs/zerotier_manual.md deleted file mode 100644 index 4c425a39..00000000 --- a/docs/zerotier_manual.md +++ /dev/null @@ -1,216 +0,0 @@ -# ZeroTier - -## Introduction -The robots from MWLC can be managed and/or operated via the internet. -Although the main purpose of the robot is driving autonomously, the robot is heavily dependent on the internet connection with the operator. - -* The operator sends instructions to the robot. -* The robot sends status information and a video stream in return. -* Outside the operational window there are data exchanges and updates. - -The connection should be reliable, have enough bandwidth, as little delay as possible, and must be secure. -The video stream in particular increases connection demands: enough bandwidth is necessary and the delay must be minimal.
-It is important that non-authorised parties are excluded from accessing the robot. -On the internet, connections hop via a great number of different parties. -The services of the different providers must be coordinated and, in addition, there are constantly changing circumstances and requirements. -The robot runs web services that can be accessed via a browser. To ensure security, the robot is only accessible via a -Virtual Private Network (VPN) from ZeroTier.
-The operators have access to the robot via a ZeroTier VPN that is set up in advance for one or more robots that are managed by the -VPN-manager. -MWLC creates a ZeroTier VPN with the robot as a node. The information to manage the VPN is available to the VPN-manager. - -#### Warnings from your security software -Certain controllers need a secured connection (https). A secured connection needs a certificate. -As the VPN is private, it uses IP numbers from private ranges like 192.nnn.nnn.nnn and 10.nnn.nnn.nnn. -For these ranges there are no official certificates from a certified provider. -Therefore a so-called self-signed certificate is used. -This might trigger warnings from your security software and/or the browser. -Please check carefully whether the warning is caused by the robot's web server via the VPN. If so, you can connect to the robot. - -## Connections of the robot -The robot has three methods to connect to the internet: - -* Wireless mobile such as 4G, 5G, LTE -* WiFi -* Ethernet - -The connections are managed by the RUT955 router. -See [teltonika-networks.com/product/rut955](https://teltonika-networks.com/product/rut955). - -The default connection is configured to use the WAN-port. The WAN-port of the router is connected to the WAN-connector at the back of -the robot. -This Ethernet connection is very useful for software updates and data exchanges. -If the WAN-port isn’t used, the router will connect via 4G. The WiFi connections are not in use, as they are not very reliable in the field -in combination with ZeroTier. -Besides this, there is a LAN connection that creates the possibility to troubleshoot the robot.
It is the task of the VPN-manager to decide which nodes are granted access. -A ZeroTier VPN has an administrator, the VPN-manager. -Networks are configured with the following parties: - -* A VPN-manager using the ZeroTier website -* One or more robots. One robot can have multiple nodes on board that all use the VPN. -* Zero or more end-user computers or operators. - -> If the VPN-manager decides to operate a robot, he or she has to use a computer which is part of the network. -> When a VPN-manager is responsible for multiple robots, all of them can be made accessible using the same network. - -### Procedure - -MWLC creates a ZeroTier account with an MWLC email account and initiates a network. -Cloud management of MWLC adds robots to this network. -The VPN-manager is responsible for the computers in the network. If a computer has access to the network, any user of the computer could -in principle operate the robot. It is important not to have more computers in the network than strictly necessary. -MWLC sets up the ZeroTier network and adds the robots as nodes. The information is handed over to a VPN-manager to add computers, -as nodes, for the operators.
In the credentials document you will find the information to take over management of your network. -You could, naturally, also set up your own ZeroTier networks. -The most necessary functions are described in this document, but please be aware of any changes by ZeroTier itself. - -## ZeroTier Hub - -### Log in - -Go to [my.zerotier.com/login](https://my.zerotier.com/login) - -![](img/zerotier/login_to_zerotier.png) - -`Click the button to proceed` - - -![](img/zerotier/login_email_password.png) - -`Email: see credentials` -`Password: see credentials` -`Click ‘Log In’` - -![](img/zerotier/hub_menu.png) - -`Click ‘Networks’` - -![](img/zerotier/your_networks.png) - -`Choose the network from your credentials.` -`If it isn’t available or you see other networks, contact MWLC.` - -![](img/zerotier/hub_overview_small.png) - -`You will see the Network screen. This screen has four sections.` -`Sections can be closed using the blue arrows.` -`Please open the ‘Members’ section.` - -## Add a computer -Adding computers is done in three steps. -1. The VPN-manager invites the computer-owner to install the ZeroTier-client and send back the nodeID.
-
 Optionally the VPN-manager can also send the NetworkID. -1. The computer owner: - * installs the ZeroTier-client on the computer
-
 * instructs the application to join the network, and
-
 * sends the VPN-manager the node ID as shown in the application -1. The VPN-manager authorises the computer. - - -### Ad 1. The invitation -There are no special requirements for the invitation. At the bottom of the ‘Members’ section there is a little tool. -By simply filling in the email address, the proposed candidate receives all necessary information. - -![](img/zerotier/manually_add_member.png) - -`If the VPN-manager already knows the nodeID, it’s possible to use ‘MANUALLY ADD MEMBER’ -and add the node directly to the network.` - -### Ad 2. Create a new Node -To add a computer to the network, the computer has to run the ZeroTier-client. -Install the application from ZeroTier: [www.zerotier.com/download](https://www.zerotier.com/download) - -![](img/zerotier/client_application_menu.png) - -`Your computer gets a ‘Node ID’` -`In the client choose ‘Join network’` - -![](img/zerotier/enter_network_id.png) - -`Use the networkID received from the VPN-manager` -`Click ‘Join’` - -The network will appear in the list on the computer, but before it has access the VPN-manager must authorise it. -Inform the VPN-manager of your nodeID so he or she can check if it appears in the network. This can take minutes, up to hours, depending -on the complexity of the internet routes between your node and the other nodes. -Keep in mind that changing your IP number on the internet will be seen as a trigger to recalculate the managed routes. - -### Ad 3. Authorisation of the computer by the VPN-manager -The moment the computer has joined the network, it can be authorised. -This authorisation has to be done by the VPN-manager. - -Log in to ZeroTier, go to the network via the ‘Networks’ option in the top menu and click on the correct networkID. -The network-management screen appears. - -![](img/zerotier/network_id_and_name.png) - -`Close the ‘Settings’ by clicking the blue arrow.` - -The ‘Members’ section for managing network nodes appears.
-

![](img/zerotier/hub_network_members.png) - -`Check if you recognise the nodeID in the column ‘Address’.` -`If this is a familiar node you can authorise it in the column ‘Auth?’.` -`After authorisation the node will get a managed IP.` - -### Robots in the network -In this overview you also see the robots in your network. MWLC is responsible for making a robot accessible in the correct network. - -#### Managed / physical IPs - -The VPN client makes the computers and robots part of a closed, virtual IP network. All nodes have two IP addresses: - -* One to connect the computer to the internet.
-
 The physical connection to the internet. E.g. the robot’s 4G router gets its IP number from the SIM card.
-
 If the computer changes its physical internet connection, such as switching from 4G to a LAN, then this number changes.
-
 At that moment the ZeroTier connections will be restructured to find the most efficient route. - During this restructuring the computer won’t be connected to the VPN. This can take some time. -* One to connect the computer to the virtual private network, the managed IP number. 
-
 This number can be changed by the user but does not affect the connection time. - -#### Automatic optimisation of the VPN -The ZeroTier infrastructure calculates the optimal route between the nodes of the network. This is necessary to avoid delay in the -video stream. If changes are made somewhere in the route, a recalculation can be triggered. -If a computer is connected to the ZeroTier VPN, a request will first be resolved in the VPN before going outside to the internet. - --- - -### Connection issues - -#### Primary checks -* On the client, is there another VPN running? -* On the client, are there any other browsers or tabs running with a connection to the robot? -* On the client, does the application use the correct network ID? -* On the client, does the client have internet access? -* On the ZeroTier network members section, does the client appear in the network as ‘ONLINE’? -* On the ZeroTier network members section, is the client authorised? - -#### Changing IP / computer wake-up / network ISP switch -If a node changes its physical connection, 4G, cable or WiFi, it can take several minutes to recalculate the routes. -When a computer wakes up from its sleep mode this can trigger a recalculation of the network. -Some more advanced routers can switch ISP to improve speed or provide for failover scenarios. In case of a ZeroTier network this can be -counterproductive. Most of the time this also means changing the IP number and therefore a recalculation of the routes. - -On the client, in order to improve the connection set-up speed it might help to initiate this manually. -On a Mac open a terminal and issue: - - sudo launchctl unload /Library/LaunchDaemons/com.zerotier.one.plist &&\ - sudo launchctl load /Library/LaunchDaemons/com.zerotier.one.plist - -#### Changing the MTU -In certain cases the network interface maximum transmission unit (MTU) may be too large. 
-You can inspect the mtu of an interface in the terminal with: - - ifconfig - -Set the mtu using the network interface name. - - sudo ifconfig mtu 1200 - ->This change is temporary and does not survive restarts. ->Please refer to your operating system to make the change permanent, if possible. \ No newline at end of file diff --git a/ftpd/Dockerfile b/ftpd/Dockerfile index 6ba27fd0..c31c555f 100644 --- a/ftpd/Dockerfile +++ b/ftpd/Dockerfile @@ -1,28 +1,24 @@ -# https://hub.docker.com/r/gists/pure-ftpd/dockerfile +# Using an image as base FROM gists/pure-ftpd:1.0.49 ENV PYTHONUNBUFFERED=1 RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python ENV PUBLIC_HOST=localhost \ - MIN_PASV_PORT=30000 \ - MAX_PASV_PORT=30009 \ - UID=1000 \ - GID=1000 + MIN_PASV_PORT=30000 \ + MAX_PASV_PORT=30009 \ + UID=1000 \ + GID=1000 VOLUME /home/ftpuser /etc/pureftpd EXPOSE 21 $MIN_PASV_PORT-$MAX_PASV_PORT -COPY ./ftpd app/ +COPY ./ftpd /app/ WORKDIR /app -ENTRYPOINT ["/usr/bin/entrypoint.sh"] +# Make sure the script is executable +RUN chmod +x /app/create_user.sh -CMD /usr/sbin/pure-ftpd \ - -P $PUBLIC_HOST \ - -p $MIN_PASV_PORT:$MAX_PASV_PORT \ - -l puredb:/etc/pureftpd/pureftpd.pdb \ - -E \ - -j \ - -R \ No newline at end of file +ENTRYPOINT ["/usr/bin/entrypoint.sh"] +CMD ["/usr/sbin/pure-ftpd", "-P", "$PUBLIC_HOST", "-p", "$MIN_PASV_PORT:$MAX_PASV_PORT", "-l", "puredb:/etc/pureftpd/pureftpd.pdb", "-E", "-j", "-R"] diff --git a/ftpd/create_user.sh b/ftpd/create_user.sh index 1c3d2408..8c85303f 100755 --- a/ftpd/create_user.sh +++ b/ftpd/create_user.sh @@ -1,2 +1,10 @@ #!/bin/sh -(echo rob; echo rob) | pure-pw useradd rob -m -u ftpuser -d /home/ftpuser \ No newline at end of file + +# Check if the user already exists +if ! pure-pw show rob > /dev/null 2>&1; then + # Adding the user silently + (echo rob; echo rob) | pure-pw useradd rob -m -u ftpuser -d /home/ftpuser > /dev/null 2>&1 + echo "User 'rob' created." +else + echo "User 'rob' already exists. 
No need to create." +fi diff --git a/ftpd/wrap.py b/ftpd/wrap.py index ea513a14..01f4ee43 100644 --- a/ftpd/wrap.py +++ b/ftpd/wrap.py @@ -7,20 +7,37 @@ logger = logging.getLogger(__name__) -log_format = "%(levelname)s: %(asctime)s %(filename)s %(funcName)s %(message)s" - -_MAIN_COMMAND = ["/usr/sbin/pure-ftpd", "-P", "localhost", "-p", "30000:30009", "-l", "puredb:/etc/pureftpd/pureftpd.pdb", "-E", "-j", "-R"] - +log_format = '%(levelname)s: %(asctime)s %(filename)s %(funcName)s %(message)s' + +_MAIN_COMMAND = ['/usr/sbin/pure-ftpd', + '-P', + 'localhost', + '-p', + '30000:30009', + '-l', + 'puredb:/etc/pureftpd/pureftpd.pdb', + '-E', + '-j', + '-R'] def main(): - pw_file = os.path.join(os.path.sep, "etc", "pureftpd", "pureftpd.passwd") + pw_file = os.path.join(os.path.sep, 'etc', 'pureftpd', 'pureftpd.passwd') if not os.path.exists(pw_file): - subprocess.call(["/app/create_user.sh"]) - logger.info("Created the default ftp user.") - subprocess.call(_MAIN_COMMAND) - + try: + subprocess.check_call(['/app/create_user.sh']) + logger.info("Created the default ftp user.") + except subprocess.CalledProcessError as e: + logger.error("Failed to create user: {}".format(e)) + except PermissionError as e: + logger.error("Permission denied: {}".format(e)) + try: + subprocess.check_call(_MAIN_COMMAND) + except subprocess.CalledProcessError as e: + logger.error("Failed to start FTP service: {}".format(e)) + except PermissionError as e: + logger.error("Permission denied: {}".format(e)) if __name__ == "__main__": - logging.basicConfig(format=log_format, datefmt="%Y%m%d:%H:%M:%S %p %Z") + logging.basicConfig(format=log_format, datefmt='%Y%m%d:%H:%M:%S %p %Z') logging.getLogger().setLevel(logging.INFO) main() diff --git a/httpd/Dockerfile b/httpd/Dockerfile deleted file mode 100644 index f4884be8..00000000 --- a/httpd/Dockerfile +++ /dev/null @@ -1,19 +0,0 @@ -FROM centipede2donald/ubuntu-bionic:python27-opencv32-gstreamer10 -#The service is built on top of Ubuntu Bionic with Python 
2.7, OpenCV 3.2, and GStreamer 1.0 pre-installed - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install -y --no-install-recommends \ - haproxy \ - npm \ - && apt-get -y clean && rm -rf /var/lib/apt/lists/* - -RUN npm install -g mapport - -#The entry point for the Docker container is set to run haproxy with a configuration file /app/haproxy.conf. -#This means that when the container starts, it will initiate HAProxy with the given configuration in haproxy.conf. - -COPY ./httpd app/ -WORKDIR /app - -CMD ["/usr/sbin/haproxy", "-f", "/app/haproxy.conf"] \ No newline at end of file diff --git a/httpd/certs/lan.conf b/httpd/certs/lan.conf deleted file mode 100644 index f6fe94ac..00000000 --- a/httpd/certs/lan.conf +++ /dev/null @@ -1,41 +0,0 @@ -[req] -default_bits = 2048 -distinguished_name = req_distinguished_name -req_extensions = req_ext -x509_extensions = v3_req -prompt = no - -[req_distinguished_name] -countryName = NL -stateOrProvinceName = Utrecht -localityName = Utrecht -organizationName = MWLC -commonName = rover.mwlc.ai - -[req_ext] -subjectAltName = @alt_names - -[v3_req] -subjectAltName = @alt_names - -[alt_names] -IP.1 = 192.168.193.10 -IP.2 = 192.168.193.11 -IP.3 = 192.168.193.12 -IP.4 = 192.168.193.13 -IP.5 = 192.168.193.14 -IP.6 = 192.168.193.15 -IP.7 = 192.168.193.16 -IP.8 = 192.168.193.17 -IP.9 = 192.168.193.18 -IP.10 = 192.168.193.19 -IP.11 = 10.147.17.10 -IP.12 = 10.147.17.11 -IP.13 = 10.147.17.12 -IP.14 = 10.147.17.13 -IP.15 = 10.147.17.14 -IP.16 = 10.147.17.15 -IP.17 = 10.147.17.16 -IP.18 = 10.147.17.17 -IP.19 = 10.147.17.18 -IP.20 = 10.147.17.19 diff --git a/httpd/certs/readme.txt b/httpd/certs/readme.txt deleted file mode 100644 index 2fbdb8bb..00000000 --- a/httpd/certs/readme.txt +++ /dev/null @@ -1,8 +0,0 @@ - -openssl req -x509 -nodes -days 730 -newkey rsa:2048 -keyout key.pem -out mwlc.pem -config lan.cnf -sudo bash -c 'cat key.pem mwlc.pem >> cert.pem' - - -Inspect with: -openssl x509 -in cert.pem -text -noout - diff 
--git a/httpd/haproxy.template b/httpd/haproxy.template deleted file mode 100644 index 7d046713..00000000 --- a/httpd/haproxy.template +++ /dev/null @@ -1,38 +0,0 @@ -# version 0.66.0 -global - tune.ssl.default-dh-param 2048 - -defaults - timeout connect 10s - timeout client 1m - timeout server 1m - -frontend localhost - bind *:80 - bind *:9001 - bind *:9002 - mode http - option forwardfor - default_backend main - acl is_stream1 dst_port 9001 - acl is_stream2 dst_port 9002 - acl is_pilot_api path_beg /teleop/pilot - use_backend stream1 if is_stream1 - use_backend stream2 if is_stream2 - use_backend pilot if is_pilot_api - -backend main - mode http - server rover 127.0.0.1:8080 check - -backend pilot - mode http - server rover 127.0.0.1:8082 check - -backend stream1 - mode http - server rover 192.168.1.32:9101 check - -backend stream2 - mode http - server rover 192.168.1.32:9102 check diff --git a/httpd/haproxy_ssl.template b/httpd/haproxy_ssl.template deleted file mode 100644 index 22f93130..00000000 --- a/httpd/haproxy_ssl.template +++ /dev/null @@ -1,40 +0,0 @@ -# version 0.66.0 -global - tune.ssl.default-dh-param 2048 - -defaults - timeout connect 10s - timeout client 1m - timeout server 1m - -frontend localhost - bind *:80 - bind *:443 ssl crt /config/certs/cert.pem - bind *:9001 ssl crt /config/certs/cert.pem - bind *:9002 ssl crt /config/certs/cert.pem - redirect scheme https if !{ ssl_fc } - mode http - option forwardfor - default_backend main - acl is_stream1 dst_port 9001 - acl is_stream2 dst_port 9002 - acl is_pilot_api path_beg /teleop/pilot - use_backend stream1 if is_stream1 - use_backend stream2 if is_stream2 - use_backend pilot if is_pilot_api - -backend main - mode http - server rover 127.0.0.1:8080 check - -backend pilot - mode http - server rover 127.0.0.1:8082 check - -backend stream1 - mode http - server rover 192.168.1.32:9101 check - -backend stream2 - mode http - server rover 192.168.1.32:9102 check diff --git a/httpd/wrap.py b/httpd/wrap.py 
deleted file mode 100644 index 0d1b0ec8..00000000 --- a/httpd/wrap.py +++ /dev/null @@ -1,49 +0,0 @@ -#!/usr/bin/env python -from __future__ import absolute_import - -import argparse -import logging -import os -import re -import shutil -import subprocess - -logger = logging.getLogger(__name__) - -log_format = "%(levelname)s: %(asctime)s %(filename)s %(funcName)s %(message)s" - - -def _check_config(config_file): - with open(config_file, "r") as _file: - contents = _file.read() - if "version 0.66.0" in contents: - logger.info("The proxy configuration is up to date.") - else: - # Not all routers are at the default ip. - _ip = re.findall("rover.*:9101", contents)[0][6:-5] - _ssl = "ssl crt" in contents - # haproxy is a load balancer - with open("haproxy_ssl.template" if _ssl else "haproxy.template", "r") as _template: - with open(config_file, "w") as _file: - _file.write(_template.read().replace("192.168.1.32", _ip)) - logger.info("Updated the existing proxy configuration using ip '{}'.".format(_ip)) - - -def main(): - parser = argparse.ArgumentParser(description="Http proxy server.") - parser.add_argument("--config", type=str, default="/config/haproxy.conf", help="Configuration file.") - args = parser.parse_args() - - config_file = args.config - if os.path.exists(config_file): - _check_config(config_file) - else: - shutil.copyfile("haproxy.template", config_file) - logger.info("Created a new non ssl proxy configuration.") - subprocess.call(["/usr/sbin/haproxy", "-f", config_file]) - - -if __name__ == "__main__": - logging.basicConfig(format=log_format, datefmt="%Y%m%d:%H:%M:%S %p %Z") - logging.getLogger().setLevel(logging.INFO) - main() diff --git a/inference/archive/tf113-cp27-jp43.dockerfile b/inference/archive/tf113-cp27-jp43.dockerfile deleted file mode 100644 index 0caeec3b..00000000 --- a/inference/archive/tf113-cp27-jp43.dockerfile +++ /dev/null @@ -1,20 +0,0 @@ -FROM centipede2donald/nvidia-jetson:jp43-cp27-tf113-2 - -ENV DEBIAN_FRONTEND noninteractive - -RUN 
apt-get update && apt-get install -y --no-install-recommends \ - python-scipy \ - && apt-get -y clean && rm -rf /var/lib/apt/lists/* - -RUN pip install "Equation==1.2.1" - -COPY ./common common/ -COPY ./inference app/ -WORKDIR /app - -COPY ./build/*.pb /models/ -COPY ./build/*.ini /models/ - -ENV PYTHONPATH "${PYTHONPATH}:/common" - -CMD ["python", "app.py"] \ No newline at end of file diff --git a/inference/archive/tf113-cp27-x86.dockerfile b/inference/archive/tf113-cp27-x86.dockerfile deleted file mode 100644 index 31169abf..00000000 --- a/inference/archive/tf113-cp27-x86.dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -FROM tensorflow/tensorflow:1.13.2-gpu - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install -y --no-install-recommends \ - python-opencv \ - python-pip \ - python-scipy \ - python-setuptools \ - python-zmq \ - && apt-get -y clean && rm -rf /var/lib/apt/lists/* - -RUN pip install "jsoncomment==0.3.3" && \ - pip install "Equation==1.2.1" && \ - pip install "pytest==4.6.11" - -COPY ./common common/ -COPY ./inference app/ -WORKDIR /app - -COPY ./build/*.pb /models/ -COPY ./build/*.ini /models/ - -ENV PYTHONPATH "${PYTHONPATH}:/common" - -CMD ["python", "app.py"] \ No newline at end of file diff --git a/inference/archive/tf115-cp36-jp42.dockerfile b/inference/archive/tf115-cp36-jp42.dockerfile deleted file mode 100644 index f9bc7d0f..00000000 --- a/inference/archive/tf115-cp36-jp42.dockerfile +++ /dev/null @@ -1,12 +0,0 @@ -FROM centipede2donald/nvidia-jetson:jp42-nano-cp36-tensorrt516-tensorflow115-opencv440 - -#RUN pip3 install "pfilter==0.2.2" - -COPY ./common common/ -COPY ./inference app/ -WORKDIR /app - -COPY ./build/*.pb /models/ -COPY ./build/*.ini /models/ - -ENV PYTHONPATH "${PYTHONPATH}:/common" diff --git a/inference/archive/tf115-cp36-x86.dockerfile b/inference/archive/tf115-cp36-x86.dockerfile deleted file mode 100644 index 44d98739..00000000 --- a/inference/archive/tf115-cp36-x86.dockerfile +++ /dev/null @@ -1,30 +0,0 
@@ -#FROM tensorflow/tensorflow:1.15.0-gpu-py3 -#Tensorflow 1.15.4 -FROM nvcr.io/nvidia/tensorflow:20.11-tf1-py3 - -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install -y --no-install-recommends \ - python3-pip \ - python3-setuptools \ - python3-zmq \ - libgl1-mesa-glx \ - && apt-get -y clean && rm -rf /var/lib/apt/lists/* - -RUN pip3 install "opencv-python >=4.4.0.42,<4.5.0" && \ - pip3 install "numpy<1.19.0,>=1.16.0" && \ - pip3 install "scipy>=1.4.1,<1.5" && \ - pip3 install "jsoncomment==0.3.3" && \ - pip3 install "Equation==1.2.1" && \ - pip3 install "pytest==4.6.11" - -#RUN pip3 install "pfilter==0.2.2" - -COPY ./common common/ -COPY ./inference app/ -WORKDIR /app - -COPY ./build/*.pb /models/ -COPY ./build/*.ini /models/ - -ENV PYTHONPATH "${PYTHONPATH}:/common" diff --git a/inference/inference/app.py b/inference/inference/app.py index e05bc423..ad912ba0 100644 --- a/inference/inference/app.py +++ b/inference/inference/app.py @@ -224,11 +224,6 @@ def quit(self): self._network.deactivate() -def _norm_scale(v, min_=0.0, max_=1.0): - """Zero values below the minimum but let values larger than the maximum be scaled up.""" - return abs(max(0.0, v - min_) / (max_ - min_)) - - def _null_expression(*args): logger.warning("Null expression used on args '{}'".format(args)) return 100 @@ -271,6 +266,7 @@ def internal_quit(self, restarting=False): def internal_start(self, **kwargs): _errors = [] self._gpu_id = parse_option("gpu.id", int, 0, _errors, **kwargs) + # Maximum speed it can go to, is 26 fps self._process_frequency = parse_option("clock.hz", int, 20, _errors, **kwargs) self._steering_scale_left = parse_option("driver.dnn.steering.scale.left", lambda x: abs(float(x)), -1, _errors, **kwargs) self._steering_scale_right = parse_option("driver.dnn.steering.scale.right", float, 1, _errors, **kwargs) @@ -354,7 +350,6 @@ def _config(self): _override = os.path.join(self._config_dir, "inference") [parser.read(_f) for _f in 
self._glob(self._internal_models, "*.ini") + self._glob(_override, "*.ini")] cfg = dict(parser.items("inference")) if parser.has_section("inference") else {} - logger.info(cfg) return cfg def get_process_frequency(self): @@ -372,13 +367,6 @@ def setup(self): def finish(self): self._runner.quit() - # def run(self): - # from byodr.utils import Profiler - # profiler = Profiler() - # with profiler(): - # super(InferenceApplication, self).run() - # profiler.dump_stats('/config/inference.stats') - def step(self): # Leave the state as is on empty teleop state. c_teleop = self.teleop() @@ -388,6 +376,7 @@ def step(self): c_route = None if c_teleop is None else c_teleop.get("navigator").get("route") state = self._runner.forward(image=image, route=c_route) state["_fps"] = self.get_actual_hz() + # print(state) self.publisher.publish(state) chat = self.ipc_chatter() if chat is not None: diff --git a/inference/inference/torched.py b/inference/inference/torched.py index 2b3ab1af..e5ac175c 100644 --- a/inference/inference/torched.py +++ b/inference/inference/torched.py @@ -41,21 +41,6 @@ def _newest_file(paths, pattern): return match -# def _maneuver_index(turn='general.fallback'): -# _options = {'general.fallback': 0, 'intersection.left': 1, 'intersection.ahead': 2, 'intersection.right': 3} -# return _options[turn] -# -# -# def _index_maneuver(index=0): -# _options = {0: 'general.fallback', 1: 'intersection.left', 2: 'intersection.ahead', 3: 'intersection.right'} -# return _options[index] -# -# -# def maneuver_intention(turn='general.fallback', dtype=np.float32): -# command = np.zeros(4, dtype=dtype) -# command[_maneuver_index(turn=turn)] = 1 -# return command - class TRTDriver(object): def __init__(self, user_directory, internal_directory, gpu_id=0, runtime_compilation=1): diff --git a/inference/pytest.ini b/inference/pytest.ini deleted file mode 100644 index a8fd8315..00000000 --- a/inference/pytest.ini +++ /dev/null @@ -1,5 +0,0 @@ -[pytest] -cache_dir = /pytest_cache 
-filterwarnings = - ignore:.*Call to deprecated create function:DeprecationWarning - ignore:.*Conversion of the second argument:FutureWarning diff --git a/inference/runtime-cp36-x86.dockerfile b/inference/runtime-cp36-x86.dockerfile deleted file mode 100644 index bd948d93..00000000 --- a/inference/runtime-cp36-x86.dockerfile +++ /dev/null @@ -1,36 +0,0 @@ -# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_20-10.html#rel_20-11 -FROM nvcr.io/nvidia/pytorch:20.11-py3 - -#>>> np.__version__ #'1.19.2' -#>>> cv2.__version__ #'3.4.1' -#>>> torchvision.__version__ #'0.8.0a0' -#>>> torch.__version__ #'1.8.0a0+17f8c32' -#>>> tensorrt.__version__ #'7.2.1.6' -#>>> onnx.__version__ #'1.7.0' -#>>> scipy.__version__ #'1.5.2' -#>>> pytest.__version__ #'6.1.2' -ENV DEBIAN_FRONTEND noninteractive - -RUN apt-get update && apt-get install -y --no-install-recommends \ - python3-zmq \ - && apt-get -y clean && rm -rf /var/lib/apt/lists/* - -# ---------------------------------------------------------- -# OpenCV 4.1.1 -# ---------------------------------------------------------- -RUN pip3 install "opencv-python >=4.1.1,<4.1.2" && \ - pip3 install "scipy==1.5.2" && \ - pip3 install "jsoncomment==0.3.3" && \ - pip3 install "Equation==1.2.1" && \ - pip3 install "pytest==6.1.2" - -RUN pip3 install "onnxruntime-gpu==1.7.0" - -COPY ./common common/ -COPY ./inference app/ -WORKDIR /app - -COPY ./build/*.onnx /models/ -COPY ./build/*.ini /models/ - -ENV PYTHONPATH "${PYTHONPATH}:/common" diff --git a/mkdocs.yml b/mkdocs.yml deleted file mode 100644 index 356e751b..00000000 --- a/mkdocs.yml +++ /dev/null @@ -1,14 +0,0 @@ -site_name: Byodr - Docs -nav: - - Home: index.md - - Startup and maintenance: startup_manual.md - - ZeroTier VPN: zerotier_manual.md - - Controller and Browser: operator_manual.md - - Training Autopilot: autopilot_manual.md - - Interfaces: interfaces_manual.md - - Assembly kit: mwlc_kit.md - - Assembly schemes: mwlc_as_schemes.md - - Assembly 
documentation: mwlc_assembly.md - - Order: mwlc_order.md - - About: about.md -theme: readthedocs \ No newline at end of file diff --git a/pilot/pilot/app.py b/pilot/pilot/app.py index ef491b36..d9add6a9 100644 --- a/pilot/pilot/app.py +++ b/pilot/pilot/app.py @@ -37,8 +37,8 @@ def _interrupt(): class PilotApplication(Application): - def __init__(self, event, processor, relay, config_dir=os.getcwd(), hz=100): - super(PilotApplication, self).__init__(quit_event=event, run_hz=hz) + def __init__(self, event, processor, relay, config_dir=os.getcwd()): + super(PilotApplication, self).__init__(quit_event=event) self._config_dir = config_dir self._processor = processor self._monitor = None diff --git a/pilot/pilot/core.py b/pilot/pilot/core.py index 8a140b39..28e81e09 100644 --- a/pilot/pilot/core.py +++ b/pilot/pilot/core.py @@ -826,8 +826,8 @@ def internal_quit(self, restarting=False): def internal_start(self, **kwargs): _errors = [] self._driver.restart(**kwargs) - self._process_frequency = parse_option("clock.hz", int, 50, _errors, **kwargs) - self._patience_micro = parse_option("patience.ms", int, 100, _errors, **kwargs) * 1000.0 + self._process_frequency = parse_option('clock.hz', int, 80, _errors, **kwargs) + self._patience_micro = parse_option('patience.ms', int, 200, _errors, **kwargs) * 1000. self._button_north_ctl = parse_option("controller.button.north.mode", str, "driver_mode.inference.dnn", _errors, **kwargs) self._button_west_ctl = parse_option("controller.button.west.mode", str, "ignore", _errors, **kwargs) # Avoid processing the same command more than once. @@ -890,11 +890,11 @@ def _process(self, c_teleop, c_ros, c_inference): self._cache_safe("decrease cruise speed", lambda: self._driver.decrease_cruise_speed()) def _unpack_commands(self, teleop, ros, vehicle, inference): - _ts = timestamp() + _patience, _ts = self._patience_micro, timestamp() # Check if the incoming data is within the patience time limit. 
- # print("Received teleop data:", teleop) - - teleop = teleop if teleop is not None else None + teleop = teleop if teleop is not None and (_ts - teleop.get('time') < _patience) else None + vehicle = vehicle if vehicle is not None and (_ts - vehicle.get('time') < _patience) else None + inference = inference if inference is not None and (_ts - inference.get('time') < _patience) else None vehicle = vehicle if vehicle is not None and (_ts - vehicle.get("time", 0) < self._patience_micro) else None inference = inference if inference is not None and (_ts - inference.get("time", 0) < self._patience_micro) else None # ROS commands are processed each time as specified in your existing logic. diff --git a/pilot/pilot/relay.py b/pilot/pilot/relay.py index 60b063dd..d6b65558 100644 --- a/pilot/pilot/relay.py +++ b/pilot/pilot/relay.py @@ -84,7 +84,7 @@ def __init__(self, relay, client_factory=None, status_factory=None, config_dir=o super(RealMonitoringRelay, self).__init__() self._relay = relay self._config_dir = config_dir - self._integrity = MessageStreamProtocol(max_age_ms=200, max_delay_ms=250) + self._integrity = MessageStreamProtocol() self._status_factory = StatusReceiverThreadFactory() if status_factory is None else status_factory self._client_factory = PiClientFactory() if client_factory is None else client_factory self._relay_closed_calltrace = collections.deque(maxlen=1) diff --git a/pyproject.toml b/pyproject.toml index 09f4b4a6..dadd56d3 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,2 +1,2 @@ [tool.black] -line-length = 222 +line-length = 222 \ No newline at end of file diff --git a/raspi/docker-compose.yml b/raspi/docker-compose.yml deleted file mode 100644 index e5b4f1b3..00000000 --- a/raspi/docker-compose.yml +++ /dev/null @@ -1,70 +0,0 @@ -version: '2' -volumes: - volume_zerotier_config: - volume_local_config: -services: - pigpiod: - image: centipede2donald/raspbian-stretch:pigpio-zmq-byodr-0.25.0 - container_name: pigpiod - privileged: true - user: 
root - restart: always - network_mode: "host" - command: bash -c "/bin/rm -rf /var/run/pigpio.pid && /pigpio/pigpiod -gl" - zerotier: - image: zyclonite/zerotier:1.6.6 - container_name: zerotier-one - user: root - restart: always - network_mode: host - devices: - - '/dev/net/tun' - cap_add: - - SYS_ADMIN - - NET_ADMIN - - CAP_SYS_RAWIO - volumes: - - volume_zerotier_config:/var/lib/zerotier-one:rw - servos: - build: - context: . - dockerfile: pi_gpio.dockerfile - privileged: true - labels: - io.balena.features.kernel-modules: '1' - user: root - restart: always - depends_on: - - "pigpiod" - network_mode: "host" - command: bash -c "modprobe i2c-dev && python3 -m ras.servos --config /config/driver.ini" - environment: - GPIOZERO_PIN_FACTORY: 'pigpio' - volumes: - - - volume_local_config:/config:rw - stream0: - build: - context: . - dockerfile: pi_gstreamer.dockerfile - privileged: true - user: root - restart: always - network_mode: "host" - command: ["python3", "-m", "stream.camera", "--port", "9101", "--config", "/config/camera0.ini"] - stop_signal: SIGKILL - volumes: - - volume_local_config:/config:rw - stream1: - build: - context: . 
- dockerfile: pi_gstreamer.dockerfile - privileged: true - user: root - restart: always - network_mode: "host" - command: ["python3", "-m", "stream.camera", "--port", "9102", "--config", "/config/camera1.ini"] - stop_signal: SIGKILL - volumes: - - volume_local_config:/config:rw - diff --git a/raspi/pi_gpio.dockerfile b/raspi/pi_gpio.dockerfile deleted file mode 100644 index d147166c..00000000 --- a/raspi/pi_gpio.dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM centipede2donald/raspbian-stretch:pigpio-zmq-byodr-0.25.0 - -ENV PYTHONPATH "${PYTHONPATH}:/common" - -COPY ./ app/ - -WORKDIR /app \ No newline at end of file diff --git a/raspi/pi_gstreamer.dockerfile b/raspi/pi_gstreamer.dockerfile deleted file mode 100644 index 8c79d125..00000000 --- a/raspi/pi_gstreamer.dockerfile +++ /dev/null @@ -1,9 +0,0 @@ -FROM centipede2donald/raspbian-stretch:gst-omx-rpi-0.50.2 - -ENV PYTHONPATH "${PYTHONPATH}:/common" - -COPY ./ app/ - -WORKDIR /app - -CMD ["sleep", "infinity"] \ No newline at end of file diff --git a/raspi/pytest.ini b/raspi/pytest.ini deleted file mode 100644 index b954da6f..00000000 --- a/raspi/pytest.ini +++ /dev/null @@ -1,2 +0,0 @@ -[pytest] -cache_dir = /pytest_cache \ No newline at end of file diff --git a/raspi/ras/__init__.py b/raspi/ras/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/raspi/ras/core.py b/raspi/ras/core.py deleted file mode 100644 index 45beb231..00000000 --- a/raspi/ras/core.py +++ /dev/null @@ -1,180 +0,0 @@ -import logging -import threading - -import pyvesc -import serial -from gpiozero import DigitalInputDevice -from pyvesc.VESC.messages import GetValues, SetDutyCycle, SetRPM - -from byodr.utils import timestamp -from byodr.utils.option import parse_option - -logger = logging.getLogger(__name__) - - -class CommandHistory(object): - """ - track the history of commands given to the robot (possibly motor commands). It checks for commands that might be 'missing' based on a certain threshold and can be reset. 
- """ - - def __init__(self, timeout_seconds=180, hz=25): - self._threshold = timeout_seconds * hz - self._num_missing = None - self.reset() - - def touch(self, steering, throttle, wakeup=False): - if wakeup: - self._num_missing = 0 - else: - has_steering = steering is not None and abs(steering) > 1e-3 - has_throttle = throttle is not None and abs(throttle) > 1e-3 - if not has_steering and not has_throttle: - self._num_missing += 1 - elif not self.is_missing(): - # After the missing state is reached a wakeup is required to reset. - self._num_missing = 0 - - def reset(self): - self._num_missing = self._threshold + 1 - - def is_missing(self): - return self._num_missing > self._threshold - - -class HallRps(object): - """This class seems to be related to a Hall sensor (probably for measuring rotations per second, RPS). - It monitors a specific GPIO pin (pin=16 by default) for changes, likely detecting rotations. - The sensor's detections are used to calculate rotations per second (rps). - """ - - def __init__(self, pin=16, moment=0.05, debug=False): - self._moment = moment - self._debug = debug - self._lock = threading.Lock() - self._rps = 0 - self._detect_time = 0 - self._detect_duration = 0 - self._num_detections = 0 - self._sensor = DigitalInputDevice(pin=pin, pull_up=True) - self._sensor.when_activated = self._detect - - def tick(self): - with self._lock: - # Drop to zero when stopped. 
- _elapsed = timestamp() - self._detect_time - if self._detect_duration > 0 and _elapsed / self._detect_duration > 1: - self._rps *= (1 - self._moment) if self._rps > 1e-2 else 0 - - def rps(self): - with self._lock: - return self._rps - - def detections(self): - with self._lock: - return self._num_detections - - def _detect(self): - with self._lock: - _now = timestamp() - self._detect_duration = _now - self._detect_time - _rps = 1e6 / self._detect_duration - self._rps = (self._moment * _rps + (1.0 - self._moment) * self._rps) if self._rps > 0 else _rps - self._detect_time = _now - if self._debug: - self._num_detections += 1 - - -class HallOdometer(object): - def __init__(self, **kwargs): - self._cm_per_revolution = parse_option("odometer.distance.cm_per_revolution", float, 15, **kwargs) - self._debug = parse_option("odometer.debug", int, 0, **kwargs) == 1 - self._alpha = parse_option("odometer.moment.alpha", float, 0.10, **kwargs) - self._enabled = parse_option("drive.type", str, **kwargs) == "gpio_with_hall" - self._hall = None - - def is_enabled(self): - return self._enabled - - def setup(self): - if self._enabled: - self._hall = HallRps(moment=self._alpha, debug=self._debug) - logger.info("Created hall odometer with cm/rev={:2.2f} alpha={:2.2f} and debug={}.".format(self._cm_per_revolution, self._alpha, self._debug)) - - def quit(self): - self._enabled = False - self._hall = None - - def velocity(self): - _velocity = self._hall.rps() * self._cm_per_revolution * 1e-2 # Convert to meters per second. 
- self._hall.tick() - if self._debug: - logger.info("{:2.2f} n={}".format(self._hall.rps(), self._hall.detections())) - return _velocity - - -class VESCDrive(object): - def __init__(self, serial_port="/dev/ttyACM0", rpm_drive=True, cm_per_pole_pair=1): - self._port = serial_port - self._rpm_drive = rpm_drive - self._cm_per_pp = cm_per_pole_pair - self._lock = threading.Lock() - self._ser = None - - def _close(self): - if self._ser is not None: - try: - self._ser.close() - except serial.serialutil.SerialException: - pass - self._ser = None - - def _open(self): - _good = False - try: - if self._ser is None: - self._ser = serial.Serial(self._port, baudrate=115200, timeout=0.05) - logger.info("Connected serial port {}.".format(self._port)) - _good = self._ser.isOpen() - except serial.serialutil.SerialException: - self._close() - _good = False - return _good - - def is_open(self): - with self._lock: - return self._open() - - def close(self): - with self._lock: - self._close() - - def get_velocity(self): - return (self.get_rpm() / 60.0) * self._cm_per_pp * 1e-2 # Convert to meters per second. 
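Both odometry paths above end in the same kind of unit conversion to meters per second: the hall odometer multiplies RPS by the configured centimeters per revolution, and the VESC path converts RPM through centimeters per pole pair. A sketch of the two conversions (function names are illustrative):

```python
def hall_velocity_ms(rps, cm_per_revolution):
    # As in HallOdometer.velocity(): rev/s * cm/rev gives cm/s, scaled to m/s.
    return rps * cm_per_revolution * 1e-2


def vesc_velocity_ms(rpm, cm_per_pole_pair):
    # As in VESCDrive.get_velocity(): revolutions per minute to per second,
    # then centimeters to meters.
    return (rpm / 60.0) * cm_per_pole_pair * 1e-2


# 10 rev/s on a 15 cm/rev wheel is 1.5 m/s.
assert abs(hall_velocity_ms(10.0, 15.0) - 1.5) < 1e-9
# 6000 RPM at 1 cm per pole pair is 1.0 m/s.
assert abs(vesc_velocity_ms(6000, 1.0) - 1.0) < 1e-9
```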
- - def get_rpm(self): - with self._lock: - if self._open(): - try: - self._ser.write(pyvesc.encode_request(GetValues)) - if self._ser.in_waiting > 78: - (response, consumed) = pyvesc.decode(self._ser.read(79)) - return response.rpm - else: - raise AssertionError("Protocol violation on the response length.") - except serial.serialutil.SerialException: - self._close() - raise AssertionError("The serial connection is not operational.") - - def set_effort(self, value): - with self._lock: - _operational = self._open() - if _operational: - try: - if self._rpm_drive: - self._ser.write(pyvesc.encode(SetRPM(int(value * 1e3)))) - else: - self._ser.write(pyvesc.encode(SetDutyCycle(float(value * 1e-1)))) - except serial.serialutil.SerialException: - self._close() - _operational = False - return _operational diff --git a/raspi/ras/driver.template b/raspi/ras/driver.template deleted file mode 100644 index 0b2b1586..00000000 --- a/raspi/ras/driver.template +++ /dev/null @@ -1,11 +0,0 @@ -[driver] -throttle.domain.forward.shift = 4.20 -throttle.domain.backward.shift = -2 -throttle.reverse.gear = -6 -drive.0.serial.port = /dev/ttyACM0 -drive.1.serial.port = /dev/ttyACM1 - -# We are using only vesc_dual -drive.type = vesc_dual -#drive.type = vesc_single -#drive.type = gpio_with_hall \ No newline at end of file diff --git a/raspi/ras/servos.py b/raspi/ras/servos.py deleted file mode 100644 index e638bfb3..00000000 --- a/raspi/ras/servos.py +++ /dev/null @@ -1,549 +0,0 @@ -from __future__ import absolute_import - -import argparse -import collections -import copy -import logging -import multiprocessing -import os -import shutil -import signal -import subprocess -import time -from abc import ABC, abstractmethod -from configparser import ConfigParser - -import numpy as np -from byodr.utils import Application, timestamp -from byodr.utils.ipc import JSONPublisher, JSONServerThread -from byodr.utils.option import parse_option -from byodr.utils.protocol import MessageStreamProtocol -from 
byodr.utils.usbrelay import SearchUsbRelayFactory, StaticRelayHolder -from gpiozero import AngularServo # interfacing with the GPIO pins of the Raspberry Pi - -from .core import CommandHistory, HallOdometer, VESCDrive - -logger = logging.getLogger(__name__) -log_format = "%(levelname)s: %(asctime)s %(filename)s %(funcName)s %(message)s" - -signal.signal(signal.SIGINT, lambda sig, frame: _interrupt()) -signal.signal(signal.SIGTERM, lambda sig, frame: _interrupt()) - -quit_event = multiprocessing.Event() - - -def _interrupt(): - logger.info("Received interrupt, quitting.") - quit_event.set() - - -class AbstractDriver(ABC): - def __init__(self, relay): - self._relay = relay - - @staticmethod - def configuration_check(config): - version_ok = config is not None and config.get("app_version", -1) == 2 - if config is not None and not version_ok: - logger.warning("Received incompatible application version - configuration aborted.") - return version_ok - - @abstractmethod - def has_sensors(self): - raise NotImplementedError() - - @abstractmethod - def relay_ok(self): - raise NotImplementedError() - - @abstractmethod - def relay_violated(self, on_integrity=True): - raise NotImplementedError() - - @abstractmethod - def set_configuration(self, config): - raise NotImplementedError() - - @abstractmethod - def is_configured(self): - raise NotImplementedError() - - @abstractmethod - def velocity(self): - raise NotImplementedError() - - @abstractmethod - def drive(self, steering, throttle): - raise NotImplementedError() - - @abstractmethod - def quit(self): - raise NotImplementedError() - - -class NoopDriver(AbstractDriver): - def __init__(self, relay): - super().__init__(relay) - self._relay.close() - - def has_sensors(self): - return False - - def relay_ok(self): - pass - - def relay_violated(self, on_integrity=True): - pass - - def set_configuration(self, config): - pass - - def is_configured(self): - return True - - def velocity(self): - return 0 - - def drive(self, steering, 
throttle):
-        return 0
-
-    def quit(self):
-        pass
-
-
-class AbstractSteerServoDriver(AbstractDriver, ABC):
-    def __init__(self, relay, **kwargs):
-        super().__init__(relay)
-        self._steer_servo_config = dict(
-            pin=parse_option("servo.steering.pin.nr", int, 13, **kwargs),
-            min_pw=parse_option("servo.steering.min_pulse_width.ms", float, 0.5, **kwargs),
-            max_pw=parse_option("servo.steering.max_pulse_width.ms", float, 2.5, **kwargs),
-            frame=parse_option("servo.steering.frame_width.ms", float, 20.0, **kwargs),
-        )
-        self._steering_config = dict(scale=parse_option("steering.domain.scale", float, 1.0, **kwargs))
-        self._steer_servo = None
-        self._last_config = None
-
-    @staticmethod
-    def _angular_servo(message):
-        fields = ("pin", "min_pw", "max_pw", "frame")
-        m_config = [message.get(f) for f in fields]
-        # The pulse and frame widths are configured in milliseconds; AngularServo expects seconds.
-        pin, min_pw, max_pw, frame = [m_config[0]] + [1e-3 * x for x in m_config[1:]]
-        return AngularServo(pin=pin, min_pulse_width=min_pw, max_pulse_width=max_pw, frame_width=frame)
-
-    def _create_servo(self, servo, name, message):
-        logger.info("Creating servo {} with config {}".format(name, message))
-        if servo is not None:
-            servo.close()
-        servo = self._angular_servo(message=message)
-        return servo
-
-    def _apply_steering(self, steering):
-        if self._steer_servo is not None:
-            scale = self._steering_config.get("scale")
-            self._steer_servo.angle = scale * 90.0 * min(1, max(-1, steering))
-
-    def set_configuration(self, config):
-        # Translate the values into our domain.
- _steer_servo_config = self._steer_servo_config - _steer_servo_config["min_pw"] = 0.5 + 0.5 * max(-2, min(2, config.get("steering_offset"))) - if _steer_servo_config != self._last_config: - self._steer_servo = self._create_servo(self._steer_servo, "steering", _steer_servo_config) - self._last_config = copy.deepcopy(_steer_servo_config) - - def is_configured(self): - return self._steer_servo is not None - - def quit(self): - if self._steer_servo is not None: - self._steer_servo.close() - - -class GPIODriver(AbstractSteerServoDriver): - def __init__(self, relay, **kwargs): - super().__init__(relay) - # Our relay is expected to be wired on the motor power line. - self._relay.open() - self._motor_servo_config = dict( - pin=parse_option("servo.motor.pin.nr", int, 12, **kwargs), - min_pw=parse_option("servo.motor.min_pulse_width.ms", float, 0.5, **kwargs), - max_pw=parse_option("servo.motor.max_pulse_width.ms", float, 2.5, **kwargs), - frame=parse_option("servo.motor.frame_width.ms", float, 20.0, **kwargs), - ) - self._throttle_config = dict( - reverse=parse_option("throttle.reverse.gear", int, 0.0, **kwargs), - forward_shift=parse_option("throttle.domain.forward.shift", float, 0.0, **kwargs), - backward_shift=parse_option("throttle.domain.backward.shift", float, 0.0, **kwargs), - scale=parse_option("throttle.domain.scale", float, 2.0, **kwargs), - ) - self._motor_servo = None - - def has_sensors(self): - return False - - def relay_ok(self): - self._relay.close() - - def relay_violated(self, on_integrity=True): - self._relay.open() - - def set_configuration(self, config): - if self.configuration_check(config): - logger.info("Received configuration {}.".format(config)) - super().set_configuration(config) - self._throttle_config["scale"] = max(0, config.get("motor_scale")) - self._motor_servo = self._create_servo(self._motor_servo, "motor", self._motor_servo_config) - - def is_configured(self): - return super().is_configured() and self._motor_servo is not None - - def 
velocity(self): - raise NotImplementedError() - - def drive(self, steering, throttle): - self._apply_steering(steering) - _motor_effort = 0 - if self._motor_servo is not None: - config = self._throttle_config - _motor_effort = config.get("scale") * throttle - _motor_shift = config.get("forward_shift") if throttle > 0 else config.get("backward_shift") - _motor_angle = min(90, max(-90, _motor_shift + _motor_effort)) - _reverse_boost = config.get("reverse") - if throttle < -0.990 and _reverse_boost < _motor_angle: - _motor_effort = _reverse_boost / 90.0 - _motor_angle = _reverse_boost - self._motor_servo.angle = _motor_angle - return _motor_effort - - def quit(self): - self._relay.open() - super().quit() - if self._motor_servo is not None: - self._motor_servo.close() - - -class SingularVescDriver(AbstractSteerServoDriver): - def __init__(self, relay, **kwargs): - super().__init__(relay) - self._relay.open() - # Attempt to suppress noise. - self._velocity = 0.0 - self._velocity_alpha = 0.5 - self._drive = VESCDrive(serial_port=parse_option("drive.serial.port", str, "/dev/ttyACM0", **kwargs), cm_per_pole_pair=parse_option("drive.distance.cm_per_pole_pair", float, 0.88, **kwargs)) - self._throttle_config = dict(scale=parse_option("throttle.domain.scale", float, 2.0, **kwargs)) - - def has_sensors(self): - return self.is_configured() - - def relay_ok(self): - self._relay.close() - - def relay_violated(self, on_integrity=True): - if on_integrity: - self._relay.open() - - def set_configuration(self, config): - if self.configuration_check(config): - logger.info("Received configuration {}.".format(config)) - self._throttle_config["scale"] = max(0, config.get("motor_scale")) - if self._drive.is_open(): - super().set_configuration(config) - - def is_configured(self): - # This method is polled repeatedly. - # First check if the drive is open with the side effect that opening is attempted when not in operation. 
- return self._drive.is_open() and super().is_configured() - - def velocity(self): - try: - self._velocity = self._velocity_alpha * self._velocity + (1 - self._velocity_alpha) * self._drive.get_velocity() - self._velocity = self._velocity if abs(self._velocity) > 5e-2 else 0 - return self._velocity - except Exception as e: - logger.warning(e) - return 0 - - def drive(self, steering, throttle): - _motor_effort = self._throttle_config.get("scale") * throttle - _operational = self._drive.set_effort(_motor_effort) - if _operational: - self._apply_steering(steering) - return _motor_effort - - def quit(self): - self._relay.open() - super().quit() - self._drive.close() - - -class DualVescDriver(AbstractDriver): - def __init__(self, relay, config_file_dir, **kwargs): - super().__init__(relay) - self._relay.close() - self._config_file_dir = config_file_dir - self._config_file_path = os.path.join(self._config_file_dir, "drive_config.ini") - self._previous_motor_alternate = None - self._pp_cm = parse_option("drive.distance.cm_per_pole_pair", float, 2.3, **kwargs) - # Initialize with default configuration - self.update_drive_instances(kwargs) - self.last_config = None # Attribute to store the last configuration - - self._steering_offset = 0 - self._steering_effect = max(0.0, float(kwargs.get("drive.steering.effect", 1.8))) - self._throttle_config = dict(scale=parse_option("throttle.domain.scale", float, 2.0, **kwargs)) - self._axes_ordered = kwargs.get("drive.axes.mount.order", "normal") == "normal" - self._axis0_multiplier = 1 if kwargs.get("drive.axis0.mount.direction", "forward") == "forward" else -1 - self._axis1_multiplier = 1 if kwargs.get("drive.axis1.mount.direction", "forward") == "forward" else -1 - - def update_drive_instances(self, config): - parser = ConfigParser() - parser.read(self._config_file_path) - - motor_alternate = config.get("motor_alternate", False) - if motor_alternate != self._previous_motor_alternate: - self._previous_motor_alternate = 
motor_alternate - # Log and swap the ports - if motor_alternate: - port0, port1 = "/dev/ttyACM1", "/dev/ttyACM0" - else: - port0, port1 = "/dev/ttyACM0", "/dev/ttyACM1" - - # Update instances with new serial ports - # Right wheel - self._drive1 = VESCDrive(serial_port=port0, rpm_drive=False, cm_per_pole_pair=self._pp_cm) - # Left wheel - self._drive2 = VESCDrive(serial_port=port1, rpm_drive=False, cm_per_pole_pair=self._pp_cm) - - logger.info("Updated wheel port mapping: drive1={}, drive2={}".format(port0, port1)) - - def set_configuration(self, config): - if config != self.last_config: - if self.configuration_check(config): - self._steering_offset = max(-1.0, min(1.0, config.get("steering_offset"))) - self._throttle_config["scale"] = max(0, config.get("motor_scale")) - self.update_drive_instances(config) - logger.info("Received new configuration {}.".format(config)) - self.last_config = config # Update last configuration - - def has_sensors(self): - return self.is_configured() - - def relay_ok(self): - self._relay.close() - - def relay_violated(self, on_integrity=True): - if on_integrity: - self._relay.open() - - def is_configured(self): - return self._drive1.is_open() and self._drive2.is_open() - - def velocity(self): - try: - return (self._drive1.get_velocity() + self._drive2.get_velocity()) / 2.0 - except Exception as e: - logger.warning(e) - return 0 - - def drive(self, steering, throttle): - _motor_scale = self._throttle_config.get("scale") - # Scale down throttle for one wheel, the other retains its value. 
- steering = min(1.0, max(-1.0, steering + self._steering_offset)) - throttle = min(1.0, max(-1.0, throttle)) - effect = 1 - min(1.0, abs(steering) * self._steering_effect) - left = throttle if steering >= 0 else throttle * effect - right = throttle if steering < 0 else throttle * effect - a = (right if self._axes_ordered else left) * self._axis0_multiplier * _motor_scale - b = (left if self._axes_ordered else right) * self._axis1_multiplier * _motor_scale - self._drive1.set_effort(a) - self._drive2.set_effort(b) - return np.mean([a, b]) - - def quit(self): - self._relay.open() - self._drive1.close() - self._drive2.close() - - -class MainApplication(Application): - def __init__(self, event, config_file, relay, hz, test_mode=False): - super(MainApplication, self).__init__(run_hz=hz, quit_event=event) - self.test_mode = test_mode - self.config_file_dir = config_file - self.relay = relay - self._integrity = MessageStreamProtocol(max_age_ms=100, max_delay_ms=100) - self._cmd_history = CommandHistory(hz=hz) - self._config_queue = collections.deque(maxlen=1) - self._drive_queue = collections.deque(maxlen=4) - self._last_drive_command = None # Store the last drive command - - self._chassis = None - self.platform = None - self.publisher = None - self._odometer = None # Initialize here but set up later - - def setup_components(self): - # Check configuration files and update, read configuration, then setup odometer and chassis - self.check_configuration_files() - kwargs = self.read_configuration() - self._odometer = HallOdometer(**kwargs) - - drive_type = kwargs.get("drive.type", "unknown") - if drive_type in ["gpio", "gpio_with_hall"]: - self._chassis = GPIODriver(self.relay, **kwargs) - elif drive_type == "vesc_single": - self._chassis = SingularVescDriver(self.relay, **kwargs) - elif drive_type == "vesc_dual": - self._chassis = DualVescDriver(self.relay, self.config_file_dir, **kwargs) - else: - raise AssertionError("Unknown drive type '{}'.".format(drive_type)) - - 
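`DualVescDriver.drive()` above steers by differential throttle: the wheel on the inside of the turn is scaled down by `drive.steering.effect` (default 1.8) while the outer wheel keeps the commanded value. The mixing can be reproduced in isolation (function name is illustrative; axis multipliers and mount order are left out):

```python
def mix(steering, throttle, steering_effect=1.8):
    # Clamp the inputs to [-1, 1] as in DualVescDriver.drive().
    steering = min(1.0, max(-1.0, steering))
    throttle = min(1.0, max(-1.0, throttle))
    # At effect >= 1.0 full steering stops the inner wheel entirely.
    effect = 1 - min(1.0, abs(steering) * steering_effect)
    left = throttle if steering >= 0 else throttle * effect
    right = throttle if steering < 0 else throttle * effect
    return left, right


assert mix(0.0, 0.5) == (0.5, 0.5)   # straight ahead: both wheels equal
left, right = mix(0.5, 0.5)          # steering right slows the right wheel
assert left == 0.5 and right < 0.5
```

The real driver then applies the per-axis mount direction multipliers and the configured axis order before sending the efforts to the two VESCs.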
self.platform.add_listener(self._on_message)
-        self._integrity.reset()
-        self._cmd_history.reset()
-        self._odometer.setup()
-
-    def check_configuration_files(self):
-        """Check that the configuration file exists; if not, create it from the template."""
-        config_file = "driver.ini"
-        template_file_path = "ras/driver.template"
-
-        if not os.path.exists(self.config_file_dir):
-            shutil.copyfile(template_file_path, self.config_file_dir)
-            logger.info("Created {} from template at {}".format(config_file, self.config_file_dir))
-
-        self._verify_and_add_missing_keys(self.config_file_dir, template_file_path)
-
-    def read_configuration(self):
-        parser = ConfigParser()
-        parser.read(self.config_file_dir)
-        kwargs = {}
-        if parser.has_section("driver"):
-            kwargs.update(dict(parser.items("driver")))
-        if parser.has_section("odometer"):
-            kwargs.update(dict(parser.items("odometer")))
-        return kwargs
-
-    def _verify_and_add_missing_keys(self, ini_file, template_file):
-        config = ConfigParser()
-        template_config = ConfigParser()
-
-        config.read(ini_file)
-        template_config.read(template_file)
-
-        # Loop through each section and key in the template.
-        for section in template_config.sections():
-            if not config.has_section(section):
-                config.add_section(section)
-            for key, value in template_config.items(section):
-                if not config.has_option(section, key):
-                    config.set(section, key, value)
-                    logger.info("Added missing key '{}' in section '[{}]' to {}".format(key, section, ini_file))
-
-        # Save changes to the ini file if any modifications have been made.
-        with open(ini_file, "w") as configfile:
-            config.write(configfile)
-
-    def _pop_config(self):
-        return self._config_queue.popleft() if bool(self._config_queue) else None
-
-    def _pop_drive(self):
-        """Return one command from the front of the drive queue."""
-        # This case happens when booting for the first time and the pilot (PIL) service hasn't sent anything yet.
-        if not self._drive_queue:
-            if self._last_drive_command:
-                # If the queue is empty and there's a last 
known command, return it instead of None - return self._last_drive_command - else: - # A default command if no commands have ever been received - return {"steering": 0.0, "throttle": 0.0, "reverse": 0, "wakeup": 0} - else: - # Pop the command from the queue and update the last known command - self._last_drive_command = self._drive_queue.popleft() - return self._last_drive_command - - def _on_message(self, message): - self._integrity.on_message(message.get("time")) - if message.get("method") == "ras/driver/config": - self._config_queue.appendleft(message.get("data")) - else: - self._drive_queue.appendleft(message.get("data")) - - def finish(self): - self._chassis.quit() - self._odometer.quit() - - def step(self): - n_violations = self._integrity.check() - if n_violations > 5: - self._chassis.relay_violated(on_integrity=True) - self._integrity.reset() - return - - c_config, c_drive = self._pop_config(), self._pop_drive() - # print(c_drive) - self._chassis.set_configuration(c_config) - - v_steering = 0 if c_drive is None else c_drive.get("steering", 0) - v_throttle = 0 if c_drive is None else c_drive.get("throttle", 0) - v_wakeup = False if c_drive is None else bool(c_drive.get("wakeup")) - - self._cmd_history.touch(steering=v_steering, throttle=v_throttle, wakeup=v_wakeup) - if self._cmd_history.is_missing(): - self._chassis.relay_violated(on_integrity=False) - elif n_violations < -5: - self._chassis.relay_ok() - - # Immediately zero out throttle when violations start occurring. - v_throttle = 0 if n_violations > 0 else v_throttle - _effort = self._chassis.drive(v_steering, v_throttle) - _data = dict(time=timestamp(), configured=int(self._chassis.is_configured()), motor_effort=_effort) - if self._chassis.has_sensors(): - _data.update(dict(velocity=self._chassis.velocity())) - elif self._odometer.is_enabled(): - _data.update(dict(velocity=self._odometer.velocity())) - - # Let the communication partner know we are operational. 
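The guards in `step()` above form a small hysteresis: any positive violation count zeroes the throttle immediately, more than five violations open the relay and abort the step, and a clearly healthy stream (count below -5) closes the relay again. A condensed view (function name illustrative; the real method also consults the command-history watchdog first):

```python
def step_guards(n_violations, throttle):
    # Sustained integrity violations open the relay and skip driving.
    if n_violations > 5:
        return "open_relay", 0
    action = "close_relay" if n_violations < -5 else "hold"
    # Throttle is zeroed as soon as any violation is seen.
    return action, (0 if n_violations > 0 else throttle)


assert step_guards(6, 0.5) == ("open_relay", 0)
assert step_guards(2, 0.5) == ("hold", 0)
assert step_guards(-6, 0.5) == ("close_relay", 0.5)
```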
- self.publisher.publish(data=_data) - - -def main(): - parser = argparse.ArgumentParser(description="Steering and throttle driver.") - parser.add_argument("--config", type=str, default="/config/driver.ini", help="Configuration file.") - args = parser.parse_args() - - config_file = args.config - - _relay = SearchUsbRelayFactory().get_relay() - ras_dynamic_ip = subprocess.check_output("hostname -I | awk '{for (i=1; i<=NF; i++) if ($i ~ /^192\\.168\\./) print $i}'", shell=True).decode().strip().split()[0] - assert _relay.is_attached(), "The relay device is not attached." - - holder = StaticRelayHolder(relay=_relay) - try: - - application = MainApplication(quit_event, config_file, relay=holder, hz=50) - - application.publisher = JSONPublisher(url="tcp://{}:5555".format(ras_dynamic_ip), topic="ras/drive/status") - application.platform = JSONServerThread(url="tcp://{}:5550".format(ras_dynamic_ip), event=quit_event, receive_timeout_ms=50) - application.setup_components() # Set up components including reading the config - - threads = [application.platform] - if quit_event.is_set(): - return 0 - - [t.start() for t in threads] - application.run() - - logger.info("Waiting on threads to stop.") - [t.join() for t in threads] - finally: - holder.open() - - while not quit_event.is_set(): - time.sleep(1) - - -if __name__ == "__main__": - logging.basicConfig(format=log_format, datefmt="%Y%m%d:%H:%M:%S %p %Z") - logging.getLogger().setLevel(logging.INFO) - main() diff --git a/raspi/stream/__init__.py b/raspi/stream/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/raspi/stream/camera.py b/raspi/stream/camera.py deleted file mode 100644 index 005c15a9..00000000 --- a/raspi/stream/camera.py +++ /dev/null @@ -1,192 +0,0 @@ -#!/usr/bin/env python -from __future__ import absolute_import - -import argparse -import asyncio -import logging -import multiprocessing -import os -import shutil -import signal -import threading -import time -import subprocess -import re 
-import glob - -from configparser import ConfigParser as SafeConfigParser - -from tornado import web, ioloop -from tornado.platform.asyncio import AnyThreadEventLoopPolicy - -from byodr.utils import Application -from byodr.utils.option import parse_option -from byodr.utils.video import create_video_source -from byodr.utils.websocket import HttpLivePlayerVideoSocket, JMuxerVideoStreamSocket - -logger = logging.getLogger(__name__) - -log_format = "%(levelname)s: %(asctime)s %(filename)s %(funcName)s %(message)s" - -signal.signal(signal.SIGINT, lambda sig, frame: _interrupt()) -signal.signal(signal.SIGTERM, lambda sig, frame: _interrupt()) - -quit_event = multiprocessing.Event() - - -def _interrupt(): - logger.info("Received interrupt, quitting.") - quit_event.set() - - -class CameraApplication(Application): - def __init__(self, stream, event): - super(CameraApplication, self).__init__(quit_event=event, run_hz=2) - self._stream = stream - - def setup(self): - pass - - def step(self): - self._stream.check() - - -gst_commands = { - "h264/rtsp": "rtspsrc location=rtsp://{user}:{password}@{ip}:{port}{path} latency=0 drop-on-latency=true do-retransmission=false ! " - 'queue ! rtph264depay ! h264parse ! queue ! video/x-h264,stream-format="byte-stream" ! queue ! appsink', - "raw/usb/h264/udp": "v4l2src device={uri} ! video/x-raw,width={src_width},height={src_height} ! videoflip method={video_flip} ! tee name=t " - "t. ! queue ! videoconvert ! videoscale ! video/x-raw,width={udp_width},height={udp_height} ! queue ! " - "omxh264enc target-bitrate={udp_bitrate} control-rate=1 interval-intraframes=50 ! queue ! " - "video/x-h264, profile=baseline ! rtph264pay ! udpsink host={udp_host} port={udp_port} sync=false async=false " - "t. ! queue ! videoconvert ! videoscale ! video/x-raw,width={out_width},height={out_height} ! queue ! " - "omxh264enc target-bitrate={out_bitrate} control-rate=1 interval-intraframes=50 ! queue ! 
"
-    'video/x-h264,profile=baseline,stream-format="byte-stream" ! queue ! appsink',
-}
-
-
-def change_segment_config(config_dir):
-    """Rewrite the IP addresses in the given configuration file so that
-    their third octet matches the Pi's own 192.168.x address."""
-    # Get the third octet of the local IP address.
-    ip_address = subprocess.check_output("hostname -I | awk '{for (i=1; i<=NF; i++) if ($i ~ /^192\\.168\\./) print $i}'", shell=True).decode().strip().split()[0]
-    third_octet_new = ip_address.split(".")[2]
-
-    # Regular expression to match IP addresses.
-    ip_regex = re.compile(r"(\d+\.\d+\.)(\d+)(\.\d+)")
-
-    with open(config_dir, "r") as f:
-        content = f.readlines()
-
-    updated_content = []
-    changes_made = []
-    changes_made_in_file = False  # Flag to track changes in the current file
-
-    for line in content:
-        match = ip_regex.search(line)
-        if match:
-            third_octet_old = match.group(2)
-            if third_octet_old != third_octet_new:
-                # Replace the third octet.
-                new_line = ip_regex.sub(r"\g<1>" + third_octet_new + r"\g<3>", line)
-                updated_content.append(new_line)
-                changes_made.append((third_octet_old, third_octet_new))
-                changes_made_in_file = True
-                continue
-        updated_content.append(line)
-
-    # Write changes back to the file.
-    with open(config_dir, "w") as f:
-        f.writelines(updated_content)
-
-    # Log the changes that were made.
-    if changes_made_in_file:
-        logger.info("Updated {} with a new ip address of {}".format(config_dir, third_octet_new))
-
-
-def create_stream(config_file):
-    change_segment_config(config_file)
-    parser = SafeConfigParser()
-    parser.read(config_file)
-    kwargs = dict(parser.items("camera"))
-    name = os.path.basename(os.path.splitext(config_file)[0])
-    _type = parse_option("camera.type", str, **kwargs)
-    assert _type in list(gst_commands.keys()), "Unrecognized camera type '{}'.".format(_type)
-    if _type == "h264/rtsp":
-        out_width, out_height = [int(x) for x in parse_option("camera.output.shape", str, "640x480", **kwargs).split("x")]
-        config = {
-            "ip": (parse_option("camera.ip", 
str, "192.168.1.64", **kwargs)), - "port": (parse_option("camera.port", int, 554, **kwargs)), - "user": (parse_option("camera.user", str, "user1", **kwargs)), - "password": (parse_option("camera.password", str, "HaikuPlot876", **kwargs)), - "path": (parse_option("camera.path", str, "/Streaming/Channels/103", **kwargs)), - } - else: - _type = "raw/usb/h264/udp" - src_width, src_height = [int(x) for x in parse_option("camera.source.shape", str, "640x480", **kwargs).split("x")] - udp_width, udp_height = [int(x) for x in parse_option("camera.udp.shape", str, "320x240", **kwargs).split("x")] - out_width, out_height = [int(x) for x in parse_option("camera.output.shape", str, "480x320", **kwargs).split("x")] - config = { - "uri": (parse_option("camera.uri", str, "/dev/video0", **kwargs)), - "src_width": src_width, - "src_height": src_height, - "video_flip": (parse_option("camera.flip.method", str, "none", **kwargs)), - "udp_width": udp_width, - "udp_height": udp_height, - "udp_bitrate": (parse_option("camera.udp.bitrate", int, 1024000, **kwargs)), - "udp_host": (parse_option("camera.udp.host", str, "192.168.1.100", **kwargs)), - "udp_port": (parse_option("camera.udp.port", int, 5000, **kwargs)), - "out_width": out_width, - "out_height": out_height, - "out_bitrate": (parse_option("camera.output.bitrate", int, 1024000, **kwargs)), - } - _command = gst_commands.get(_type).format(**config) - _socket_ref = parse_option("camera.output.class", str, "http-live", **kwargs) - location = f"rtsp://{config['user']}:{config['password']}@{config['ip']}:{config['port']}{config['path']}" - logger.info("Socket '{}' location={}".format(name, location)) - # logger.info("Socket '{}' ref '{}' gst command={}".format(name, _socket_ref, _command)) - return (create_video_source(name, shape=(out_height, out_width, 3), command=_command), _socket_ref) - - -def main(): - parser = argparse.ArgumentParser(description="Camera web-socket server.") - parser.add_argument("--config", type=str, 
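The segment-retargeting done by `change_segment_config()` above boils down to one regex substitution: capture the first two octets, the third octet, and the fourth octet of every dotted address, and swap in the Pi's own third octet. The core of it in isolation (helper name illustrative):

```python
import re

# The same pattern the stream service uses to match dotted IP addresses.
ip_regex = re.compile(r"(\d+\.\d+\.)(\d+)(\.\d+)")


def retarget(line, third_octet_new):
    # Replace only the third octet, keeping the rest of the address intact.
    return ip_regex.sub(r"\g<1>" + third_octet_new + r"\g<3>", line)


assert retarget("camera.ip = 192.168.1.64", "7") == "camera.ip = 192.168.7.64"
assert retarget("no address here", "7") == "no address here"
```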
default="/config/stream.ini", help="Configuration file.") - parser.add_argument("--port", type=int, default=9101, help="Socket port.") - args = parser.parse_args() - - config_file = args.config - if os.path.exists(config_file) and os.path.isfile(config_file): - video_stream, socket_type = create_stream(config_file) - application = CameraApplication(stream=video_stream, event=quit_event) - - threads = [threading.Thread(target=application.run)] - if quit_event.is_set(): - return 0 - - [t.start() for t in threads] - - asyncio.set_event_loop_policy(AnyThreadEventLoopPolicy()) - asyncio.set_event_loop(asyncio.new_event_loop()) - - io_loop = ioloop.IOLoop.instance() - class_ref = HttpLivePlayerVideoSocket if socket_type == "http-live" else JMuxerVideoStreamSocket - web_app = web.Application([(r"/", class_ref, dict(video_source=video_stream, io_loop=io_loop))]) - rear_server = web.HTTPServer(web_app, xheaders=True) - rear_server.bind(args.port) - rear_server.start() - logger.info("Web service started on port {}.".format(args.port)) - io_loop.start() - - logger.info("Waiting on threads to stop.") - [t.join() for t in threads] - else: - shutil.copyfile("/app/stream/camera.template", config_file) - logger.info("Created a new camera configuration file from template.") - while not quit_event.is_set(): - time.sleep(1) - - -if __name__ == "__main__": - logging.basicConfig(format=log_format, datefmt="%Y%m%d:%H:%M:%S %p %Z") - logging.getLogger().setLevel(logging.INFO) - main() diff --git a/raspi/stream/camera.template b/raspi/stream/camera.template deleted file mode 100644 index 3a1452cb..00000000 --- a/raspi/stream/camera.template +++ /dev/null @@ -1,6 +0,0 @@ -[camera] -camera.type = h264/rtsp -camera.ip = 192.168.1.64 - -#camera.type = raw/usb/h264/udp -#camera.uri = /dev/video0 diff --git a/raspi/tests.py b/raspi/tests.py deleted file mode 100644 index 57c675f3..00000000 --- a/raspi/tests.py +++ /dev/null @@ -1,163 +0,0 @@ -from __future__ import absolute_import -from 
byodr.utils import timestamp -from byodr.utils.testing import CollectPublisher -from six.moves import map -from six.moves import range - -from raspi.ras.servos import MainApplication - - -class MyRelay(object): - def __init__(self): - self._open = True - - def is_open(self): - return self._open - - def open(self): - self._open = True - - def close(self): - self._open = False - - -class MyPlatform(object): - def __init__(self): - self._listeners = [] - - def add_listener(self, c): - self._listeners.append(c) - - def send(self, message): - list(map(lambda x: x(message), self._listeners)) - - -def test_relay_cycle(): - relay = MyRelay() - publisher = CollectPublisher(topic="test/status") - platform = MyPlatform() - - application = MainApplication(relay=relay) - application.platform = platform - application.publisher = publisher - application.setup() - - # Without reliable communication the relay is open. - application.step() - assert relay.is_open() - publisher.clear() - - # A non-zero command. - command = dict(steering=0.1, throttle=0.2, reverse=0) - - # Send the first commands to do valid communication. - # The integrity protocol does not assume valid by default. - list(map(lambda _: (platform.send(dict(time=timestamp(), method="ras/servo/drive", data=command)), application.step()), list(range(10)))) - assert relay.is_open() - publisher.clear() - - # Send wakeup to close the relay after startup. - platform.send(dict(time=timestamp(), method="ras/servo/drive", data=dict(wakeup=1))) - application.step() - assert not relay.is_open() - publisher.clear() - - # Simulate communication violations. - list(map(lambda i: (platform.send(dict(time=timestamp() + i * 1e6, method="ras/servo/drive", data=command)), application.step()), list(range(10)))) - assert relay.is_open() - publisher.clear() - - # And resume. 
- list(map(lambda _: (platform.send(dict(time=timestamp(), method="ras/servo/drive", data=command)), application.step()), list(range(10)))) - assert not relay.is_open() - publisher.clear() - - # Pretend missing commands but valid communication. - _null_command = dict(steering=0, throttle=0, reverse=0) - list(map(lambda _: (platform.send(dict(time=timestamp(), method="ras/servo/drive", data=_null_command)), application.step()), list(range(5000)))) - assert relay.is_open() - publisher.clear() - - # The communication requirements must still be met to let the other side know we are operational. - platform.send(dict(time=timestamp(), method="ras/servo/drive", data=_null_command)) - application.step() - assert len(publisher.collect()) > 0 - publisher.clear() - - # Wakeup again. - platform.send(dict(time=timestamp(), method="ras/servo/drive", data=dict(wakeup=1))) - application.step() - assert not relay.is_open() - application.finish() - assert relay.is_open() - - -def test_relay_wakeup_reset(): - relay = MyRelay() - publisher = CollectPublisher(topic="test/status") - platform = MyPlatform() - - # With hz=1 the command history threshold is 180. - application = MainApplication(relay=relay, hz=1) - application.platform = platform - application.publisher = publisher - application.setup() - - # Send the first commands to do valid communication. - command = dict(steering=0.1, throttle=0.2, reverse=0) - list(map(lambda _: (platform.send(dict(time=timestamp(), method="ras/servo/drive", data=command)), application.step()), list(range(10)))) - platform.send(dict(time=timestamp(), method="ras/servo/drive", data=dict(wakeup=1))) - application.step() - assert not relay.is_open() - publisher.clear() - - # Send the zero commands and wakeup. 
- zero = dict(steering=0, throttle=0, reverse=0) - list(map(lambda _: (platform.send(dict(time=timestamp(), method="ras/servo/drive", data=zero)), application.step()), list(range(180)))) - platform.send(dict(time=timestamp(), method="ras/servo/drive", data=dict(wakeup=1))) - application.step() - assert not relay.is_open() - publisher.clear() - - # After wakeup the counters need to have been reset in order not to revert immediately. - list(map(lambda _: (platform.send(dict(time=timestamp(), method="ras/servo/drive", data=zero)), application.step()), list(range(10)))) - assert not relay.is_open() - publisher.clear() - - application.finish() - - -def test_command_history_reset(): - relay = MyRelay() - publisher = CollectPublisher(topic="test/status") - platform = MyPlatform() - - # With hz=1 the command history threshold is 180. - application = MainApplication(relay=relay, hz=1) - application.platform = platform - application.publisher = publisher - application.setup() - - # Send the first commands to do valid communication. - command = dict(steering=0.1, throttle=0.2, reverse=0) - list(map(lambda _: (platform.send(dict(time=timestamp(), method="ras/servo/drive", data=command)), application.step()), list(range(10)))) - platform.send(dict(time=timestamp(), method="ras/servo/drive", data=dict(wakeup=1))) - application.step() - assert not relay.is_open() - publisher.clear() - - # Send zero commands until just below the threshold. - zero = dict(steering=0, throttle=0, reverse=0) - list(map(lambda _: (platform.send(dict(time=timestamp(), method="ras/servo/drive", data=zero)), application.step()), list(range(180)))) - assert not relay.is_open() - publisher.clear() - - # And then a good command and some zero commands again. - # The relay should remain open. 
- platform.send(dict(time=timestamp(), method="ras/servo/drive", data=command)) - application.step() - list(map(lambda _: (platform.send(dict(time=timestamp(), method="ras/servo/drive", data=zero)), application.step()), list(range(10)))) - assert not relay.is_open() - publisher.clear() - - application.finish() diff --git a/rosnode/Dockerfile b/rosnode/Dockerfile deleted file mode 100644 index b5723962..00000000 --- a/rosnode/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -FROM ros:foxy - -RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 4B63CF8FDE49746E98FA01DDAD19BAB3CBF125EA - -# Proceed with the rest of your setup -RUN apt-get update && apt-get install -y --no-install-recommends \ - python3-zmq \ - nano \ - wget - -COPY ./common common/ -COPY ./rosnode app/ -WORKDIR /app - -ENV PYTHONPATH "${PYTHONPATH}:/common" - -CMD ["python3", "app.py"] diff --git a/rosnode/app.py b/rosnode/app.py deleted file mode 100644 index 5e52a82b..00000000 --- a/rosnode/app.py +++ /dev/null @@ -1,131 +0,0 @@ -import argparse -import glob -import logging -import os -from configparser import ConfigParser as SafeConfigParser - -import rclpy -from actionlib_msgs.msg import GoalID -from diagnostic_msgs.msg import DiagnosticArray, DiagnosticStatus, KeyValue -from rclpy.node import Node -from std_msgs.msg import Float32 - -from byodr.utils import Application, timestamp -from byodr.utils.ipc import json_collector, JSONPublisher - - -class Bridge(Node): - def __init__(self, node_name, get_pilot_message, publish_internal): - super().__init__("byodr1" if node_name is None else node_name) - self._get_pilot = get_pilot_message - self._publish_internal = publish_internal - self.create_subscription(GoalID, "{}/v10/pilot/set_mode".format(self.get_name()), self._receive_mode, 10) - self.create_subscription(Float32, "{}/v10/pilot/set_maximum_speed".format(self.get_name()), self._receive_max_speed, 10) - self._ros_publisher = self.create_publisher(DiagnosticArray, 
"{}/v10/pilot/diagnostics".format(self.get_name()), 10) - # The timer period is specified in seconds. - self.timer = self.create_timer(1.0 / 10, self._publish_diagnostics) - - def _publish_diagnostics(self): - pilot = self._get_pilot() - if pilot: - status = DiagnosticStatus(name="mode", message="not_available") - status.values = [KeyValue(key="steer", value="{:+2.5f}".format(pilot.get("steering"))), KeyValue(key="throttle", value="{:+2.5f}".format(pilot.get("throttle")))] - driver_mode = pilot.get("driver") - if driver_mode == "driver_mode.teleop.direct": - status.message = "teleoperation" - elif driver_mode == "driver_mode.inference.dnn": - status.message = "autopilot" - status.values += [KeyValue(key="maximum_speed", value="{:+2.5f}".format(pilot.get("cruise_speed"))), KeyValue(key="desired_speed", value="{:+2.5f}".format(pilot.get("desired_speed")))] - diagnostics = DiagnosticArray() - diagnostics.header.stamp = self.get_clock().now().to_msg() - diagnostics.status = [status] - self._ros_publisher.publish(diagnostics) - - def _send_internal_command(self, cmd): - if cmd: - cmd["time"] = timestamp() - self._publish_internal(cmd) - - def _receive_mode(self, msg): - # self.get_logger().info('I heard: "%s"' % msg.data) - internal = dict() - if msg.id == "teleoperation": - internal["pilot.driver.set"] = "driver_mode.teleop.direct" - elif msg.id == "autopilot": - internal["pilot.driver.set"] = "driver_mode.inference.dnn" - self._send_internal_command(internal) - - def _receive_max_speed(self, msg): - internal = dict() - # The value may equal 0 (zero) and bool(0) evaluates to False. - if msg.data is not None: - internal["pilot.maximum.speed"] = msg.data - self._send_internal_command(internal) - - -class RosApplication(Application): - def __init__(self, config_dir=os.getcwd()): - # Just check for restarts - no need for high frequency processing. 
- super(RosApplication, self).__init__(run_hz=1) - self._config_dir = config_dir - self._bridge = None - self.pilot = None - self.publish = None - self.ipc_chatter = None - - def _config(self): - parser = SafeConfigParser() - [parser.read(_f) for _f in glob.glob(os.path.join(self._config_dir, "*.ini"))] - return dict(parser.items("ros2")) if parser.has_section("ros2") else {} - - def setup(self): - if self.active(): - if self._bridge is not None: - self._bridge.destroy_node() - bridge = Bridge(node_name=self._config().get("rover.node.name", "rover1"), get_pilot_message=self.pilot, publish_internal=self.publish) - rclpy.spin(bridge) - self._bridge = bridge - - def finish(self): - if self._bridge is not None: - self._bridge.destroy_node() - - def step(self): - chat = self.ipc_chatter() - if chat and chat.get("command") == "restart": - self.setup() - - -def main(): - parser = argparse.ArgumentParser(description="ROS2 rover node.") - parser.add_argument("--name", type=str, default="none", help="Process name.") - parser.add_argument("--config", type=str, default="/config", help="Config directory path.") - args = parser.parse_args() - - application = RosApplication(config_dir=args.config) - quit_event = application.quit_event - logger = application.logger - - publisher = JSONPublisher(url="ipc:///byodr/ros.sock", topic="aav/ros/input") - pilot = json_collector(url="ipc:///byodr/pilot.sock", topic=b"aav/pilot/output", event=quit_event) - ipc_chatter = json_collector(url="ipc:///byodr/teleop_c.sock", topic=b"aav/teleop/chatter", pop=True, event=quit_event) - application.publish = lambda m: publisher.publish(m) - application.pilot = lambda: pilot.get() - application.ipc_chatter = lambda: ipc_chatter.get() - threads = [pilot, ipc_chatter] - if quit_event.is_set(): - return 0 - - rclpy.init() - [t.start() for t in threads] - application.run() - - logger.info("Waiting on threads to stop.") - [t.join() for t in threads] - rclpy.shutdown() - - -if __name__ == "__main__": - 
logging.basicConfig(format="%(levelname)s: %(asctime)s %(filename)s %(funcName)s %(message)s", datefmt="%Y%m%d:%H:%M:%S %p %Z")
-    logging.getLogger().setLevel(logging.INFO)
-    main()
diff --git a/teleop/htm/plot_training_sessions_map/draw_training_sessions.py b/teleop/htm/plot_training_sessions_map/draw_training_sessions.py
new file mode 100644
index 00000000..5e793cb8
--- /dev/null
+++ b/teleop/htm/plot_training_sessions_map/draw_training_sessions.py
@@ -0,0 +1,273 @@
+import os
+import zipfile
+import glob
+import json
+import logging
+import pathlib
+import shutil
+
+# from gpx_converter import Converter
+import folium
+import pandas as pd
+
+logger = logging.getLogger(__name__)
+
+log_format = "%(levelname)s: %(asctime)s %(filename)s %(funcName)s %(message)s"
+
+
+# Filtered columns that will be kept in the resultant .CSV file.
+KEEP_COL = ["x_coord", "y_coord", "vehicle_conf"]
+
+TRAINING_SESSION_LOCATION = "/sessions/autopilot/"
+
+# All the compressed files found under the sessions folder.
+ZIP_FILES_LOCATION = glob.glob(f"{TRAINING_SESSION_LOCATION}**/*.zip", recursive=True)
+
+# The dates (one entry per day) on which training sessions took place.
+SESSION_DATE = []
+
+# The directory of the session maps after being moved to the static/ folder.
+SESSION_HTML_DIRECTORY = ""
+
+# Locations (absolute paths) of the resultant .CSVs made per day.
+MERGED_CSV_FILES_LOCATION = []
+
+# Folders holding the extracted session .CSV files.
+CSV_FILES_LOCATION = []
+
+CURRENT_DIRECTORY = os.path.dirname(os.path.abspath(__file__))
+
+
+def return_absolute_path(additional_path):
+    return CURRENT_DIRECTORY + "/" + additional_path
+
+
+class FindCSV:
+    """Find the compressed .ZIP folders, extract their .CSVs and move them
+    to a folder named after the date of the training session, e.g. "2023Apr06".
+    """
+
+    def extract_files_by_extension(self):
+        global SESSION_DATE
+        SESSION_DATE.clear()
+        for file in ZIP_FILES_LOCATION:
+            filename_only = os.path.basename(file)
+            # Extract the session date from the zip file name.
+            date_from_filename_only = filename_only.split("T")[0]
+            if len(SESSION_DATE) == 0:
+                SESSION_DATE.append(date_from_filename_only)
+            elif date_from_filename_only not in SESSION_DATE[-1]:
+                SESSION_DATE.append(date_from_filename_only)
+
+            self.create_sessions_folder(SESSION_DATE[-1])
+
+            self.store_sessions_folder()
+
+            try:
+                with zipfile.ZipFile(file, "r") as zip_ref:
+                    # Extract every .CSV file in the chosen compressed file.
+                    for file_info in zip_ref.infolist():
+                        if file_info.filename.endswith(".csv"):
+                            try:
+                                zip_ref.extract(
+                                    file_info,
+                                    path=os.path.join(
+                                        CURRENT_DIRECTORY, SESSION_DATE[-1]
+                                    ),
+                                )
+                            except Exception as e:
+                                logger.warning(
+                                    f"Error occurred while extracting {file_info.filename}: {e}"
+                                )
+                                continue
+            except zipfile.BadZipFile as bz:
+                logger.error(f"The zip file is invalid or corrupted: {bz}")
+            except Exception as e:
+                logger.error(f"Error occurred while processing the zip file: {e}")
+
+    def store_sessions_folder(self):
+        """Record in CSV_FILES_LOCATION the folders the .CSV files are moved to
+        from the compressed training session archives.
+        """
+        if len(CSV_FILES_LOCATION) == 0:
+            CSV_FILES_LOCATION.append(
+                "{0}/{1}".format(
+                    pathlib.Path(__file__).parent.resolve(), SESSION_DATE[-1]
+                )
+            )
+        elif SESSION_DATE[-1] not in
CSV_FILES_LOCATION[-1]:
+            CSV_FILES_LOCATION.append(
+                "{0}/{1}".format(
+                    pathlib.Path(__file__).parent.resolve(), SESSION_DATE[-1]
+                )
+            )
+
+    def create_sessions_folder(self, session_data):
+        """Create a folder named after the date of the sessions."""
+        session_directory = os.path.join(CURRENT_DIRECTORY, session_data)
+        if not os.path.exists(session_directory):
+            os.makedirs(session_directory)
+
+
+class ProcessCSVtoGPX:
+    """Make one (cleaned) .CSV file that holds all the data of a session."""
+
+    def __init__(self):
+        self.df = None
+        self.merged_csv_files = ""
+        # Holds one dataframe per .CSV file found in the training session folder.
+        self.dataframe_data = []
+
+    def create_resultant_CSV(self):
+        """Create a main .CSV file with the data from all the .CSVs that are found in a training session."""
+        # Keep only the coordinate columns from each file.
+        for folder in zip(CSV_FILES_LOCATION, SESSION_DATE):
+            self.dataframe_data.clear()
+            CSV_files = glob.glob(f"{folder[0]}/*.csv")
+            for file in CSV_files:
+                # Read every .CSV file in the folder.
+                read_file = pd.read_csv(file)
+                new_file = read_file[KEEP_COL]
+                self.dataframe_data.append(new_file)
+
+            self.df = pd.concat(self.dataframe_data, ignore_index=True)
+            self.clean_dataframe()
+            self.convert_to_CSV(folder[1])
+            # self.convert_to_GPX(folder[1])
+
+    def clean_dataframe(self):
+        """Remove duplicate rows from the table."""
+        old_count = self.df.shape[0]
+        self.df = self.df.drop_duplicates()
+        logger.info(
+            f"Removed duplicate rows: {old_count} down to {self.df.shape[0]}"
+        )
+
+    def convert_to_CSV(self, session_data):
+        # Use a forward slash so the path also works on POSIX systems.
+        self.merged_csv_files = f"{session_data}/resultant_{session_data}.csv"
+        MERGED_CSV_FILES_LOCATION.append(return_absolute_path(self.merged_csv_files))
+        self.df.to_csv(MERGED_CSV_FILES_LOCATION[-1], index=False)
+        logger.info(f"CSV file saved as {self.merged_csv_files}")
+
+
+class PlotMap:
+    """Create a map with the points from training sessions in it."""
+
+    def
create_marker(self, row, map_obj):
+        """Add a marker on top of a plotted point on the map.
+
+        Args:
+            row (pandas.Series): A dataframe row with the x and y coordinates in it.
+            map_obj: The Folium map to add the marker to.
+        """
+        folium.Marker(location=[row["x_coord"], row["y_coord"]]).add_to(map_obj)
+
+    def plot_map(self):
+        for file in zip(MERGED_CSV_FILES_LOCATION, SESSION_DATE):
+            dataframe = pd.read_csv(file[0])
+            # Folium expects [lat, lon]; in this data x_coord is used as latitude,
+            # matching how create_marker places its markers.
+            latitude = dataframe[KEEP_COL[0]]
+            longitude = dataframe[KEEP_COL[1]]
+            # Create a map object.
+            map_obj = folium.Map(
+                location=[latitude.mean(), longitude.mean()],
+                zoom_start=12,
+                max_zoom=22,
+            )
+
+            # Apply the create_marker function to every 40th row.
+            dataframe.iloc[::40].apply(self.create_marker, axis=1, args=(map_obj,))
+
+            # Create a 2D list from the two columns.
+            coordinates = [
+                [column1, column2]
+                for column1, column2 in zip(latitude.tolist(), longitude.tolist())
+            ]
+
+            self.draw_map_line(map_obj, coordinates)
+
+            # Save the map to an HTML file.
+            training_map_folder = CURRENT_DIRECTORY + "/training_maps"
+            map_path = os.path.join(training_map_folder, f"{file[1]}.html")
+            os.makedirs(os.path.dirname(map_path), exist_ok=True)
+            map_obj.save(map_path)
+            logger.info(f"Created the training route of session {file[1]}")
+
+    def draw_map_line(self, map_obj, coordinates):
+        """Draw a line between the points that are plotted on the map.
+
+        Args:
+            map_obj: A map created with Folium and Leaflet.js.
+            coordinates (list[list[float]]): 2D list with the x and y coordinates of the training session.
+        """
+        folium.PolyLine(
+            locations=coordinates, color="blue", weight=2.5, opacity=1
+        ).add_to(map_obj)
+
+
+class MapFolder:
+    def move_folder(self):
+        """Copy the map .HTML files from the 'plot_training_sessions_map' folder to the static folder, to be discoverable by Flask."""
+        global SESSION_HTML_DIRECTORY
+        source_dir = os.path.join(
+            os.path.dirname(CURRENT_DIRECTORY),
+            "plot_training_sessions_map",
+            "training_maps",
+        )
+
+        target_dir = os.path.join(
os.path.dirname(CURRENT_DIRECTORY), "static", "training_maps"
+        )
+
+        file_names = os.listdir(source_dir)
+        # Create the target directory if it does not exist yet.
+        os.makedirs(target_dir, exist_ok=True)
+        for file_name in file_names:
+            shutil.copy(os.path.join(source_dir, file_name), target_dir)
+            logger.info("Copied the map folder successfully")
+        SESSION_HTML_DIRECTORY = target_dir
+
+
+class GenerateHTML:
+    """Build a mapping with the session dates as keys and the paths of the .html files generated by Folium as values."""
+
+    def make_file(self):
+        session_maps = {}
+        for session_date in SESSION_DATE:
+            # Path of the .html map file generated for this date.
+            html_path = (
+                SESSION_HTML_DIRECTORY.replace("app/htm/static", "static/")
+                + f"/{session_date}.html"
+            )
+            session_maps[session_date] = html_path
+        return json.dumps(session_maps)  # Convert the mapping to JSON.
+
+
+def draw_training_sessions():
+    logger.info(f"Autopilot folder: {os.listdir(TRAINING_SESSION_LOCATION)}")
+    logger.info(f"Found these zip files: {ZIP_FILES_LOCATION}")
+    FindCSV().extract_files_by_extension()
+    ProcessCSVtoGPX().create_resultant_CSV()
+    PlotMap().plot_map()
+    MapFolder().move_folder()
+    training_sessions_directory_json = GenerateHTML().make_file()
+    logger.info(
+        f"JSON mapping before sending the response to app.py: {training_sessions_directory_json}"
+    )
+    print(training_sessions_directory_json)
+    return training_sessions_directory_json
+
+
+if __name__ == "__main__":
+    logging.basicConfig(format=log_format, datefmt="%Y%m%d:%H:%M:%S %p %Z")
+    logging.getLogger().setLevel(logging.INFO)
+    draw_training_sessions()
diff --git a/teleop/htm/plot_training_sessions_map/requirements.txt b/teleop/htm/plot_training_sessions_map/requirements.txt
new file mode 100644
index 00000000..6edb1910
Binary files /dev/null and b/teleop/htm/plot_training_sessions_map/requirements.txt differ
diff --git
a/teleop/logbox/store.py b/teleop/logbox/store.py
index b3948343..b94fbc58 100644
--- a/teleop/logbox/store.py
+++ b/teleop/logbox/store.py
@@ -153,13 +153,19 @@ def create_event(self, event):
         timestamp = event.timestamp
         steering = float(event.steering)
         desired_speed = float(event.desired_speed)
-        heading = float(event.heading)
+
+        # Handle a possibly missing or non-numeric 'heading' value.
+        try:
+            heading = float(event.heading)
+        except (TypeError, ValueError):
+            heading = 0.0  # Default to 0.0 when the heading cannot be parsed.
+
         throttle = float(event.throttle)
         steer_src = event.steer_src
         filename = "{}__st{:+2.2f}__th{:+2.2f}__dsp{:+2.1f}__he{:+2.2f}__{}.jpg".format(str(timestamp), steering, throttle, desired_speed, heading, str(steer_src))
         with zipfile.ZipFile(self._zip_file_at_write(), mode="a", compression=0) as archive:
             archive.writestr(filename, BytesIO(np.frombuffer(memoryview(event.jpeg_buffer), dtype=np.uint8)).getvalue())
-        #
+
         self._data.loc[len(self._data)] = [
             timestamp,
             event.vehicle,
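The `store.py` hunk above guards `float(event.heading)` with a try/except so that one malformed field cannot abort event logging. A minimal standalone sketch of the same fallback pattern; the `safe_float` helper name is illustrative and not part of the codebase:

```python
def safe_float(value, default=0.0):
    # Mirror the store.py fix: fall back to a default instead of raising
    # when the incoming field is missing or not parseable as a number.
    try:
        return float(value)
    except (TypeError, ValueError):
        return default


# A heading reported over the wire may arrive as text or be absent entirely.
heading = safe_float("182.5")  # parses normally
missing = safe_float(None)     # None would raise TypeError in float()
garbled = safe_float("n/a")    # non-numeric text would raise ValueError
```

Catching `TypeError` in addition to `ValueError` covers the case where the field is `None` rather than a malformed string, which `float()` reports with a different exception type.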