Recent Projects

PromSec: Prompt Optimization for Secure Generation of Functional Source Code with Large Language Models (LLMs)

Problem: LLMs tend to generate functional but insecure code.

Objective: Optimize prompts to generate secure functional code with LLMs.

Key Contributions:

Methodology: A gGAN repairs vulnerabilities in the generated code, an LLM refines the prompt accordingly, and an iterative loop between the two optimizes prompts for code security.
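
As a rough illustration of this loop, the sketch below alternates code generation, vulnerability analysis, gGAN-based fixing, and prompt refinement. The `llm`, `ggan`, and `analyzer` objects and their methods are hypothetical stand-ins for illustration, not the PromSec implementation.

```python
# Hedged sketch of an iterative prompt-optimization loop for secure code
# generation. All objects and methods below are hypothetical stand-ins.

def optimize_prompt(initial_prompt, llm, ggan, analyzer, max_iters=5):
    """Iteratively refine a prompt until the generated code passes the analyzer."""
    prompt = initial_prompt
    for _ in range(max_iters):
        code = llm.generate_code(prompt)          # LLM generates candidate code
        issues = analyzer.scan(code)              # e.g., findings from a static analyzer
        if not issues:
            return prompt, code                   # no remaining vulnerabilities found
        fixed_code = ggan.fix(code, issues)       # gGAN proposes a security-fixed version
        prompt = llm.refine_prompt(prompt, code, fixed_code)  # update prompt from the fix
    return prompt, code
```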

Key Results: Reduces vulnerabilities in generated code while preserving its functionality, significantly cuts operational time and cost, and produces optimized prompts that transfer across programming languages and LLMs.

PromSec Figure

Publications:

Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection

Problem: Robustness evaluation of malicious domain detection lacks an adversarial attack that manipulates multiple instances jointly to evade detection.

Objective: Develop MintA, an attack that evades GNN-based malicious domain detection by optimizing multiple domain manipulations.

Key Contributions:

Methodology: Construct a surrogate model using black-box access, optimize node and edge perturbations to maximize evasiveness, and implement perturbations through domain and IP modifications.
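
The sketch below illustrates this kind of pipeline under stated assumptions: a surrogate model is trained from black-box query responses, and node/edge perturbations are greedily selected to lower the surrogate's maliciousness score. All function and object names are hypothetical, not MintA's actual code.

```python
# Illustrative sketch of a multi-instance evasion attack on a GNN detector.
# The callables passed in (query_label, train_surrogate, candidate_edits,
# apply_edit) are hypothetical stand-ins.

def multi_instance_attack(graph, targets, query_label, train_surrogate,
                          candidate_edits, apply_edit, budget=10):
    """Greedily pick graph perturbations that lower the surrogate's score."""
    # Build a surrogate model from black-box query responses.
    labels = {node: query_label(graph, node) for node in graph.nodes}
    surrogate = train_surrogate(graph, labels)

    chosen = []
    for _ in range(budget):
        # Evaluate candidate node/edge perturbations on the surrogate and keep
        # the one that most reduces the joint maliciousness score of the targets.
        best = min(candidate_edits(graph, targets),
                   key=lambda e: surrogate.score(apply_edit(graph, e), targets))
        graph = apply_edit(graph, best)
        chosen.append(best)
    # In practice, the chosen edits would be realized as domain and IP modifications.
    return chosen
```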

Key Results: Achieves over 80% success rate in evading detection, bypasses outlier detection and graph purification defenses.

MintA Figure

Papers:

SA-DS: A Dataset for Large Language Model-Driven AI Accelerator Design Generation

Problem: Lack of specialized datasets for AI-driven accelerator design.

Objective: Introduce SA-DS, a dataset to support LLM-based DNN hardware accelerator design.

Key Contributions:

Methodology: The dataset is built from Gemmini-based accelerator designs written in Chisel, supports LLM fine-tuning and multi-shot (in-context) learning, and entries are quality-checked with Verilator.
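
As a small illustration of such a quality gate, the sketch below lint-checks generated hardware designs with Verilator before admitting them. `verilator --lint-only` is a real Verilator mode, but the file layout and surrounding pipeline are assumptions, not the SA-DS build scripts.

```python
# Sketch of a Verilator-based quality gate for generated designs
# (illustrative pipeline; only the `verilator --lint-only` call is real).
import subprocess
from pathlib import Path

def passes_lint(verilog_file: Path) -> bool:
    """Return True if Verilator lints the generated Verilog without errors."""
    result = subprocess.run(
        ["verilator", "--lint-only", "-Wall", str(verilog_file)],
        capture_output=True, text=True,
    )
    return result.returncode == 0

def filter_designs(design_dir: Path) -> list[Path]:
    """Keep only the generated designs that pass the lint check."""
    return [f for f in design_dir.glob("*.v") if passes_lint(f)]
```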

Results: Achieves a 100% pass rate with multi-shot learning, requires fewer design revisions, and streamlines the overall design workflow.

SA-DS Figure

Papers:

Deepfake Detection with Vision-Language Model Explanations and GNN Integration

Problem: Increasingly realistic deepfakes threaten media authenticity, and existing detection methods struggle with robustness and generalization.

Objective: Combine visual analysis with AI-generated textual explanations to improve deepfake detection.

Key Contributions:

Methodology:

Main Results: Enhanced accuracy, with higher recall and fewer false negatives; improved stability under attack, reducing the performance drop from 13.3% to 1.5%; and a scalable solution that is effective across varied deepfake scenarios.

Note: This summary is intentionally brief because the paper is under double-blind review.
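
Since the methodology is not detailed here, the following is only a generic sketch of one way visual embeddings and embeddings of model-generated textual explanations could be fused before classification. It should not be read as the paper's architecture; the PyTorch module, dimensions, and the omission of the GNN component are assumptions for illustration.

```python
# Generic fusion sketch (not the paper's method): concatenate a visual embedding
# with an embedding of an AI-generated textual explanation, then classify.
import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    def __init__(self, vis_dim=768, txt_dim=768, hidden=256):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),          # real vs. fake logits
        )

    def forward(self, vis_emb, txt_emb):
        fused = torch.cat([vis_emb, txt_emb], dim=-1)
        return self.classifier(fused)
```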

Deepfake Detection Illustration

Semi-decentralized Inference in hetGNNs for Traffic Demand Prediction: An Edge-Computing Approach

Problem: Scalability issues in GNN-based traffic forecasting due to high message passing overhead.

Objective: Create a scalable, efficient traffic forecasting method using semi-decentralized hetGNN-LSTM.

Key Contributions:

Methodology:

Main Results: High accuracy compared to existing models. Inference time reduced by an order of magnitude. Improved scalability and reduced costs.
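
A toy sketch of the semi-decentralized idea under stated assumptions: the traffic graph is partitioned across edge servers, each server aggregates locally, and only boundary-node embeddings are exchanged between servers. The partitioning, exchange scheme, and aggregation rule are illustrative, not the paper's implementation.

```python
# Toy sketch of semi-decentralized GNN inference: each edge server aggregates
# within its own partition, and only boundary-node embeddings cross servers.
# (Illustrative only; numpy-based, no real networking or LSTM component.)
import numpy as np

def local_message_passing(adj, feats):
    """One round of mean-neighbor aggregation within a single partition."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return adj @ feats / deg

def semi_decentralized_round(partitions, boundary_links):
    """partitions: list of (adj, feats) per edge server.
    boundary_links: tuples (src_part, src_idx, dst_part, dst_idx)."""
    # Step 1: purely local aggregation on each edge server.
    updated = [local_message_passing(adj, feats) for adj, feats in partitions]
    # Step 2: exchange only boundary-node embeddings between servers.
    for sp, si, dp, di in boundary_links:
        updated[dp][di] = (updated[dp][di] + updated[sp][si]) / 2
    return updated
```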

hetGNN Inference Illustration

Papers:

Back to Homepage