National Intern Day Spotlight: Nithin Sivakumar
My Internship Journey at Antaris: Building the Future of Satellite Software
By Nithin Sivakumar
When I first heard about Antaris, I was immediately drawn to their mission of revolutionizing satellite software through SatOS™. The idea of working on technology that could simplify the design, simulation, operation, and delivery of satellite constellations was incredibly exciting. Little did I know that my internship would involve building critical components of their satellite imaging pipeline—a project that would challenge me technically while teaching me invaluable lessons about collaboration and problem solving.
The Challenge: Processing Satellite Images in Space
Imagine you're a satellite orbiting Earth, and your camera has just captured a high-resolution image of the planet below. Before you can send this valuable data back to Earth, you need to answer some critical questions: Is this image actually useful, or is it obscured by clouds? How can we compress it efficiently to save precious bandwidth? And how do we ensure the data remains secure during transmission?
This was the core challenge I tackled during my internship at Antaris. I worked on developing a comprehensive image processing pipeline that would handle satellite imagery from capture to secure transmission—a system that could make real-time decisions about image quality and efficiently prepare data for downlink.
Part 1: The Image Processing and Compression Pipeline
The first major component I worked on was the image processing, compression, and encryption pipeline. This system needed to handle the technical challenges of processing images in space while maintaining quality and ensuring security.
I built a multi-stage pipeline using several cutting-edge technologies:
OpenCV Processing: The pipeline begins with OpenCV's `cv::imread()` function, which loads the raw image from disk and decodes it into a `cv::Mat` object. The image is stored in BGR (Blue-Green-Red) format by default, which is OpenCV's standard color space. The `ImageProcessor` class provides methods to extract the raw pixel data as a contiguous byte array, along with the image dimensions (width, height, and channels).
A critical step is the `convertBGRtoRGB()` function, which uses `cv::cvtColor()` to transform the image from BGR to RGB format. This conversion is essential because nvJPEG expects RGB input, while OpenCV stores images in BGR format. The conversion creates a new `cv::Mat` object with the RGB data, ensuring compatibility with the next stage of the pipeline.
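To make this concrete, here's a minimal sketch of that load-and-convert step. The function name is illustrative (the actual `ImageProcessor` internals differ), and it assumes an 8-bit, 3-channel input:

```cpp
#include <opencv2/opencv.hpp>
#include <stdexcept>
#include <string>

// Load an image from disk and convert it to the RGB layout nvJPEG expects.
cv::Mat loadAsRGB(const std::string& path) {
    cv::Mat bgr = cv::imread(path, cv::IMREAD_COLOR);  // decoded as 8-bit BGR
    if (bgr.empty()) {
        throw std::runtime_error("failed to load image: " + path);
    }
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);  // swap B and R channels
    return rgb;  // contiguous, interleaved RGB pixel data
}
```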
GPU-Accelerated Compression: The compression stage is where the real computational heavy lifting occurs. The nvJPEG library provides hardware-accelerated JPEG encoding that leverages the massively parallel architecture of NVIDIA GPUs.
The process begins with memory allocation on the GPU. For each color channel (Red, Green, Blue), I allocated separate GPU memory buffers using cudaMalloc(). This planar format (separate memory for each channel) is required by nvJPEG, unlike the interleaved format (RGBRGBRGB...) that comes from OpenCV.
Data transformation and transfer come next: the interleaved RGB data from OpenCV must be converted to planar format on the CPU first, then copied to GPU memory using cudaMemcpy() (see the sketch after this list). This involves:
Iterating through each pixel in the image
Extracting the R, G, and B components into separate arrays
Copying each planar array to its corresponding GPU memory buffer
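Here's a simplified sketch of that conversion and upload, assuming a tightly packed 8-bit RGB cv::Mat. Buffer names are illustrative and error checking is omitted:

```cpp
#include <cuda_runtime.h>
#include <opencv2/opencv.hpp>
#include <vector>

// Split interleaved RGB pixels into three planar arrays and copy each
// plane into its own GPU buffer, as nvJPEG requires.
void uploadPlanarRGB(const cv::Mat& rgb, unsigned char* d_channels[3]) {
    const size_t n = static_cast<size_t>(rgb.rows) * rgb.cols;
    std::vector<unsigned char> planes[3];
    for (int c = 0; c < 3; ++c) planes[c].resize(n);

    const unsigned char* src = rgb.data;  // RGBRGBRGB...
    for (size_t i = 0; i < n; ++i) {
        for (int c = 0; c < 3; ++c) {
            planes[c][i] = src[i * 3 + c];
        }
    }

    // One allocation and one copy per channel.
    for (int c = 0; c < 3; ++c) {
        cudaMalloc(reinterpret_cast<void**>(&d_channels[c]), n);
        cudaMemcpy(d_channels[c], planes[c].data(), n, cudaMemcpyHostToDevice);
    }
}
```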
The nvJPEG encoding process then takes over. I created an nvjpegImage_t structure that points to the GPU memory buffers, set the quality parameter (typically 70 for a good compression ratio), and configured the chroma subsampling (CSS_444 for no color information loss). The nvjpegEncodeImage() function launches the actual encoding kernel on the GPU, which processes the image data in parallel across thousands of CUDA cores.
Finally, the compressed bitstream retrieval involves calling nvjpegEncodeRetrieveBitstream() twice: first to get the size of the compressed data, then to copy the actual JPEG bitstream from GPU memory back to CPU memory. The result is a standard JPEG file that can be transmitted efficiently.
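Put together, the encode sequence looks roughly like this. It assumes the planar channel buffers are already on the GPU, uses the default CUDA stream, and omits error checking for brevity:

```cpp
#include <nvjpeg.h>
#include <vector>

std::vector<unsigned char> encodeJpeg(unsigned char* d_channels[3],
                                      int width, int height) {
    nvjpegHandle_t handle;
    nvjpegEncoderState_t state;
    nvjpegEncoderParams_t params;
    nvjpegCreateSimple(&handle);
    nvjpegEncoderStateCreate(handle, &state, nullptr);
    nvjpegEncoderParamsCreate(handle, &params, nullptr);
    nvjpegEncoderParamsSetQuality(params, 70, nullptr);  // quality 70
    nvjpegEncoderParamsSetSamplingFactors(params, NVJPEG_CSS_444, nullptr);

    // Point nvJPEG at the three planar channel buffers.
    nvjpegImage_t img{};
    for (int c = 0; c < 3; ++c) {
        img.channel[c] = d_channels[c];
        img.pitch[c] = static_cast<size_t>(width);  // one byte per pixel per plane
    }

    nvjpegEncodeImage(handle, state, params, &img,
                      NVJPEG_INPUT_RGB, width, height, nullptr);

    // First call reports the compressed size; second copies the bitstream out.
    size_t length = 0;
    nvjpegEncodeRetrieveBitstream(handle, state, nullptr, &length, nullptr);
    std::vector<unsigned char> jpeg(length);
    nvjpegEncodeRetrieveBitstream(handle, state, jpeg.data(), &length, nullptr);
    return jpeg;
}
```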
AES Encryption: The final stage implements AES-256 encryption with GPU acceleration. The system supports both ECB (Electronic Codebook) and CBC (Cipher Block Chaining) modes, each with different security characteristics.
ECB Mode processes each 16-byte block independently. The aes256_ecb_encrypt_kernel CUDA kernel launches with one thread per 16-byte AES block (sketched after this list), where each thread:
Loads a 16-byte block into local memory
Performs the 14 rounds of AES-256 encryption (AES-128, by comparison, uses 10)
Each round consists of: SubBytes (S-box substitution), ShiftRows (row permutation), MixColumns (column mixing), and AddRoundKey (XOR with round key)
Writes the encrypted block back to global memory
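The kernel below sketches that one-thread-per-AES-block launch pattern. It is not the real cipher: a placeholder XOR stands in for the SubBytes/ShiftRows/MixColumns logic so the indexing and memory movement can be shown in a compact, runnable form.

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// 15 round keys x 16 bytes for AES-256, resident in constant memory.
__constant__ uint8_t d_round_keys[15 * 16];

__global__ void ecbEncryptSketch(uint8_t* data, size_t num_blocks) {
    size_t idx = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (idx >= num_blocks) return;

    // Load this thread's 16-byte block into registers.
    uint8_t state[16];
    for (int i = 0; i < 16; ++i) state[i] = data[idx * 16 + i];

    // Placeholder for the AES-256 round loop; a real implementation would
    // apply SubBytes, ShiftRows, MixColumns, and AddRoundKey here.
    for (int round = 0; round < 15; ++round)
        for (int i = 0; i < 16; ++i)
            state[i] ^= d_round_keys[round * 16 + i];

    // Write the result back to global memory.
    for (int i = 0; i < 16; ++i) data[idx * 16 + i] = state[i];
}

// Launch with enough threads to cover every 16-byte block, e.g.:
//   ecbEncryptSketch<<<(num_blocks + 255) / 256, 256>>>(d_data, num_blocks);
```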
CBC Mode adds an initialization vector (IV) and chains blocks together for enhanced security (see the sketch after this list). The aes256_cbc_encrypt_kernel implements this by:
For the first block: XORing the plaintext with the IV before encryption
For subsequent blocks: XORing the plaintext with the previous ciphertext block
This chaining prevents identical plaintext blocks from producing identical ciphertext blocks
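Because each ciphertext block feeds into the next, the chaining itself is inherently sequential, so the logic is easiest to see in a host-side sketch. The cipher call is a labeled stand-in and the names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Stand-in for the real AES-256 block cipher (identity here, purely so
// the chaining logic below stays self-contained and compilable).
static void encryptBlock(uint8_t /*block*/[16]) {}

// CBC chaining: block 0 is XORed with the IV, every later block with the
// previous ciphertext, before each is run through the block cipher.
void cbcEncrypt(uint8_t* data, size_t numBlocks, const uint8_t iv[16]) {
    uint8_t prev[16];
    std::memcpy(prev, iv, 16);
    for (size_t b = 0; b < numBlocks; ++b) {
        uint8_t* block = data + b * 16;
        for (int i = 0; i < 16; ++i) block[i] ^= prev[i];
        encryptBlock(block);           // placeholder cipher call
        std::memcpy(prev, block, 16);  // ciphertext chains forward
    }
}
```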
The key expansion process happens on the CPU before kernel launch. The 256-bit key is expanded into 60 32-bit words (15 128-bit round keys) using the AES key schedule algorithm. These expanded keys are copied to GPU constant memory using cudaMemcpyToSymbol(), making them accessible to all threads during encryption.
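The upload itself is a single call. A minimal sketch, assuming a constant-memory symbol like the one in the ECB sketch above:

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Expanded AES-256 key schedule: 60 32-bit words (15 round keys).
__constant__ uint32_t d_key_schedule[60];

void uploadKeySchedule(const uint32_t hostKeys[60]) {
    // Copies the expanded keys into constant memory, visible to all threads.
    cudaMemcpyToSymbol(d_key_schedule, hostKeys, 60 * sizeof(uint32_t));
}
```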
The entire pipeline is designed to be modular and efficient, with each component optimized for the satellite environment. The system can process images in real-time, compressing data efficiently and encrypting it for secure transmission.
Part 2: The Cloud Cover Detection System
Once the pipeline was in place, I moved on to the cloud cover detection system using machine learning. Satellite images often contain significant cloud coverage that makes them useless for analysis. Rather than wasting bandwidth transmitting cloudy images, we needed a way to automatically detect and filter them out.
I implemented a U-Net convolutional neural network—a deep learning architecture whose encoder-decoder structure with skip connections produces precise segmentation maps. The model takes a satellite image and produces a binary mask showing exactly where clouds are located. By calculating the percentage of cloud coverage, the system can make intelligent decisions: if more than 30% of the image is cloudy, it gets discarded; otherwise, it proceeds to the next stage of processing.
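The decision step itself is simple once the mask exists. The network runs in Python, but the thresholding logic looks roughly like this (shown in C++/OpenCV for consistency with the rest of the pipeline; the function name and binary-mask convention are assumptions):

```cpp
#include <opencv2/opencv.hpp>

// Given a single-channel binary cloud mask (nonzero = cloud), decide
// whether the image is clear enough to keep.
bool shouldKeepImage(const cv::Mat& cloudMask, double maxCloudFraction = 0.30) {
    const double cloudy = static_cast<double>(cv::countNonZero(cloudMask));
    const double fraction = cloudy / static_cast<double>(cloudMask.total());
    return fraction <= maxCloudFraction;  // discard if more than 30% cloudy
}
```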
Training this model was particularly challenging. I sourced both raw satellite imagery and corresponding cloud masks from Sentinel Hub, a powerful API-based service for accessing Earth observation data. It allowed me to programmatically retrieve Sentinel-2 imagery along with the Scene Classification Layer (SCL), which includes pixel-level annotations like cloud, snow, vegetation, and more. By filtering this data and converting it into binary masks, I created a clean training set for cloud segmentation. The final model uses a ResNet34 encoder with custom training parameters, achieving reliable cloud detection that can operate efficiently in the constrained environment of a satellite.
The Technical Implementation
The project involved significant software engineering challenges. I worked with C++ for performance-critical components, Python for the machine learning pipeline, and CUDA for GPU acceleration. The system uses pybind11 to create seamless Python bindings for the C++ components, allowing for easy integration with the broader Antaris platform.
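To give a flavor of how those bindings look, here is a hypothetical pybind11 module; the function name, signature, and stub body are illustrative stand-ins, not the actual Antaris API:

```cpp
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <cstdint>
#include <string>
#include <vector>

namespace py = pybind11;

// Illustrative stand-in for the real GPU compression entry point.
std::vector<uint8_t> compressImage(const std::string& path, int quality) {
    // The real version chains the OpenCV, nvJPEG, and AES stages; this
    // stub only keeps the binding example self-contained.
    (void)path;
    (void)quality;
    return {};
}

PYBIND11_MODULE(image_pipeline, m) {
    m.doc() = "Satellite image pipeline bindings (illustrative sketch)";
    m.def("compress_image", &compressImage,
          py::arg("path"), py::arg("quality") = 70,
          "Load an image, JPEG-encode it on the GPU, return the bitstream");
}
```

From Python, the compiled module would then be a plain import: `import image_pipeline` followed by `jpeg = image_pipeline.compress_image("scene.png")`.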
One of the most rewarding aspects was seeing how all the pieces came together. The cloud detection model feeds into the compression pipeline, which then feeds into the encryption system. The entire workflow is orchestrated through the Antaris Cloud Platform (ACP), where users can run simulations and see the complete pipeline in action.
The Antaris Team: Mentorship and Collaboration
What truly made this internship special was the incredible team at Antaris. Working with a globally distributed team across different time zones taught me valuable lessons about remote collaboration and communication.
Brian Waldon served as my primary mentor throughout the project. His approach to mentorship was particularly impactful—he never simply gave me the answers but instead guided me through the problem-solving process.
The broader team—including Deepak Tawri, Omkar Kulkarni, Shiv Singh, Rahul Bhivare, Sai Balaji, and Karthik Govindhasamy—created an environment where learning was encouraged and collaboration was natural.
Karthik Govindhasamy, who gave me this opportunity, exemplified Antaris's commitment to fostering talent. His vision for the company and his willingness to invest in interns speak to Antaris's culture of innovation and growth.
Lessons Learned and Growth
This internship was transformative in several ways. Technically, I gained deep experience with GPU programming, machine learning deployment, and systems integration. I learned how to work with real-world constraints—limited computational resources, strict performance requirements, and the need for reliability in space applications.
But perhaps more importantly, I learned about the power of collaborative problem-solving. The team at Antaris showed me that the best solutions often come from diverse perspectives and open communication. Working across time zones taught me to be more intentional about documentation and communication, skills that will serve me well in any future role.
The Impact
The system I helped build is now part of Antaris's satellite software platform, contributing to their mission of simplifying space operations. When users connect to the Antaris Cloud Platform, they can run simulations that demonstrate the complete pipeline: capturing an image, detecting cloud coverage, processing and compressing the data, and encrypting it for secure transmission.
This isn't just a demonstration—it's a real system that could one day be deployed on actual satellites, helping to make space missions more efficient and cost-effective. The thought that my work could contribute to the future of space exploration is incredibly motivating.
Looking Forward
My time at Antaris has given me a unique perspective on the intersection of software engineering and space technology. The company’s focus on full-scale simulations represents a fundamental shift in how we approach satellite operations today, and being part of that innovation has been incredibly rewarding.
As I move forward in my career, I'll carry with me not just the technical skills I developed, but also the collaborative mindset and problem-solving approach I learned from the Antaris team. The experience has reinforced my passion for working on technology that can make a real difference in the world.
Conclusion
Interning at Antaris was more than just a technical experience—it was an opportunity to work on cutting-edge technology with a team that truly values innovation and collaboration. The project I worked on represents the kind of practical, impactful work that makes space missions more efficient and accessible.
For anyone considering an internship in the space industry, I'd encourage you to look beyond the technical challenges and consider the broader mission. At Antaris, I found a team that's not just building software—they're building the future of space exploration, one satellite at a time.
The combination of challenging technical work, supportive mentorship, and meaningful impact makes Antaris a truly special place to learn and grow. I'm grateful for the opportunity to have been part of their mission, and I'm excited to see how their technology continues to shape the future of space operations.
This blog post reflects my personal experience and the work I contributed during my internship at Antaris. The technical implementations described are part of Antaris's broader satellite software platform and represent collaborative efforts with the entire team.

