ECE 6102: Dependable Distributed Systems

Some Past Projects


You are encouraged to develop your own project ideas and discuss them with me. The constraints are that the project must involve distributed computing, it must entail substantial C/C++ and/or Java programming (projects written primarily in Perl, Tcl, or Visual Basic will not be accepted), and it must have a significant component dealing with security, fault tolerance, or general robustness. Primarily Python projects are not specifically disallowed but will be carefully scrutinized. Cloud-based projects are strongly encouraged.
  1. Reliable Stream Analysis on the Internet of Things

    Live video-stream processing for resource-poor devices. The final report for this project is included as an example of an "A" project.

    IoT Stream Analysis Final Report
  2. Dependable Storage: Fault-tolerant Secure Distributed File System

    Block-oriented distributed storage with replication and encryption.
  3. GeoShare: Geographically-Distributed and Secret Shared Data Storage in the Cloud

    Secure cloud storage built on top of Amazon S3, with secret shares distributed across Amazon regions worldwide.
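
    The region-distributed shares in a project like GeoShare are typically built on a (k, n) threshold scheme such as Shamir secret sharing: any k shares (e.g., k regions) recover the data, while fewer than k reveal nothing. A minimal sketch over a small prime field follows; the class and parameter names are illustrative, and a real system would split files into blocks and use a cryptographic RNG:

    ```java
    import java.util.Random;

    // Minimal sketch of Shamir (k-of-n) threshold secret sharing over a
    // prime field, assuming the secret fits in one field element.
    public class SecretShares {
        static final long P = 2147483647L; // prime modulus (2^31 - 1)

        // Modular exponentiation; inverses come from Fermat's little theorem.
        static long modpow(long b, long e) {
            long r = 1; b %= P;
            for (; e > 0; e >>= 1) {
                if ((e & 1) == 1) r = r * b % P;
                b = b * b % P;
            }
            return r;
        }

        // Evaluate a random degree-(k-1) polynomial with constant term =
        // secret at x = 1..n; each (x, y) pair is one share.
        public static long[][] split(long secret, int n, int k, Random rnd) {
            long[] coef = new long[k];
            coef[0] = Math.floorMod(secret, P);
            for (int i = 1; i < k; i++) coef[i] = Math.floorMod(rnd.nextLong(), P);
            long[][] shares = new long[n][2];
            for (int x = 1; x <= n; x++) {
                long y = 0, xp = 1;
                for (int i = 0; i < k; i++) {
                    y = (y + coef[i] * xp) % P;
                    xp = xp * x % P;
                }
                shares[x - 1][0] = x;
                shares[x - 1][1] = y;
            }
            return shares;
        }

        // Lagrange interpolation at x = 0 recovers the secret from any k shares.
        public static long reconstruct(long[][] shares) {
            long secret = 0;
            for (int i = 0; i < shares.length; i++) {
                long num = 1, den = 1;
                for (int j = 0; j < shares.length; j++) {
                    if (i == j) continue;
                    num = num * Math.floorMod(-shares[j][0], P) % P;
                    den = den * Math.floorMod(shares[i][0] - shares[j][0], P) % P;
                }
                secret = (secret + shares[i][1] * (num * modpow(den, P - 2) % P)) % P;
            }
            return secret;
        }
    }
    ```

    With 3-of-5 sharing, losing two regions (or having two compromised) neither destroys nor discloses the data, which is the fault-tolerance/confidentiality trade this project explores.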
  4. Virtualization: Support for Software Replication

    Modification of a virtual machine monitor to provide totally ordered message delivery, deterministic execution, etc., in order to support software replication. Demonstration of simple replicated process execution on the chosen platform.

  5. Dependable Storage: Secure FTP Repository

    Design and implementation of a distributed FTP repository using proactive secret sharing techniques to maintain confidentiality and integrity. Includes an approach to deal with nearly-coincident attacks on all repository sites.
  6. Mobile Devices: File System Verification and Repair on Resource-Constrained Devices

    Maintain distributed integrity information about a mobile device's file system, detect unauthorized changes to the file system, and repair corrupted files with the help of other devices and/or servers.
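
    The detection step of such a project can be sketched by comparing each file's current SHA-256 digest against a previously recorded manifest; the class and method names below are illustrative, and a full project would store the manifest on other devices/servers and fetch clean copies for repair:

    ```java
    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Minimal sketch: flag files whose on-disk digest no longer matches
    // the recorded manifest entry.
    public class IntegrityCheck {
        // Hex-encoded SHA-256 of a byte array.
        public static String digest(byte[] data) {
            try {
                byte[] h = MessageDigest.getInstance("SHA-256").digest(data);
                StringBuilder sb = new StringBuilder();
                for (byte b : h) sb.append(String.format("%02x", b));
                return sb.toString();
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException(e); // SHA-256 is always present
            }
        }

        // Paths whose current digest differs from the manifest.
        public static List<Path> changed(Map<Path, String> manifest) {
            List<Path> out = new ArrayList<>();
            for (Map.Entry<Path, String> e : manifest.entrySet()) {
                try {
                    if (!digest(Files.readAllBytes(e.getKey())).equals(e.getValue()))
                        out.add(e.getKey());
                } catch (IOException ex) {
                    throw new UncheckedIOException(ex);
                }
            }
            return out;
        }
    }
    ```

    On a resource-constrained device, the manifest itself must also be protected (e.g., signed or held remotely), since an attacker who can rewrite files can otherwise rewrite the digests too.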

  7. Distributed Whois Crawler

    A distributed crawler that gathers data through domain information groper (dig) requests to whois servers. The project builds on a Chord architecture to provide redundant archives of the gathered data across multiple resources. A convenient RESTful interface allows querying of the gathered data and coordinates with Chord to retrieve archived data.
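
    The core of Chord's data placement is consistent hashing on an identifier ring: each key is stored at its successor, the first node clockwise whose ID is at or past the key's ID. A minimal single-process sketch (the class name and ring size are illustrative; a real Chord deployment distributes lookups via finger tables):

    ```java
    import java.util.TreeSet;

    // Minimal sketch of Chord-style key placement: node and key IDs live
    // on a ring of size 2^m; a key belongs to its successor node.
    public class ChordRing {
        private final TreeSet<Integer> nodes = new TreeSet<>();
        private final int modulus;

        public ChordRing(int m) { this.modulus = 1 << m; }

        public void join(int nodeId)  { nodes.add(Math.floorMod(nodeId, modulus)); }
        public void leave(int nodeId) { nodes.remove(Math.floorMod(nodeId, modulus)); }

        // Successor of a key: first node at or clockwise from the key's ID.
        public int successor(int keyId) {
            int k = Math.floorMod(keyId, modulus);
            Integer s = nodes.ceiling(k);
            return (s != null) ? s : nodes.first(); // wrap around the ring
        }
    }
    ```

    Redundant archiving then falls out naturally: storing each record at its successor and the next r - 1 nodes on the ring keeps it available when individual crawler nodes fail.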
  8. A Dynamic Replica Management System for HDFS

    Dynamically change the number of replicas of a file in the Hadoop Distributed File System, based on the usage and/or importance of each file.
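
    The policy half of such a system can be sketched with an assumed heuristic: one extra replica per tenfold increase in hourly accesses past ten, clamped to a configured range. The class name and thresholds below are illustrative, not from the project; in Hadoop the chosen factor would then be applied via FileSystem.setReplication(path, factor):

    ```java
    // Minimal sketch of only the decision logic: map a file's recent access
    // count to a target replica count within [min, max].
    public class ReplicaPolicy {
        public static short targetReplicas(long accessesLastHour, short min, short max) {
            short t = min;
            // +1 replica per decade of load at or above 10 accesses/hour.
            for (long a = 10; a <= accessesLastHour; a *= 10) t++;
            return (short) Math.min(t, max);
        }
    }
    ```

    Keeping the policy separate from the mechanism makes it easy to swap in importance-based weighting, or to add hysteresis so replica counts do not oscillate under bursty access patterns.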