G H Raisoni College of Engineering and Management, Pune

(An Autonomous Institute affiliated to SPPU)

B.TECH FINAL YEAR PROJECT

S.R.S

Guided by: Prof. Pramod Dhamdhere


Group ID: Group 5

Submitted by: Eklavya Kirote A21


Shashank Salgarkar A59
Shubham Chole A65
Sneha Gaware A66
Software Requirements Specification (SRS) Document
AI System for Converting Hand Drawings into Generative
Images using Neural Cellular Automata
Table of Contents

1. Introduction
2. System Overview
3. Functional Requirements
4. Non-Functional Requirements
5. System Architecture
6. Data Flow Diagram
7. User Interface Design
8. System Testing
9. Conclusion
10. References

1. Introduction
This Software Requirements Specification (SRS) document outlines the requirements for the
development of an AI system designed to convert hand drawings into generative images using
Neural Cellular Automata (NCA). The system aims to provide users with a seamless interface to
transform hand-drawn sketches into visually appealing generative art pieces.
The project focuses on the development of an innovative Artificial Intelligence (AI) system
aimed at transforming hand-drawn sketches into generative images through the utilization of
Neural Cellular Automata (NCA). Traditional methods of image generation often require
intricate digital tools or significant manual intervention. However, this project seeks to
revolutionize the creative process by offering a streamlined solution that harnesses the power of
deep learning and generative algorithms. The motivation behind this project stems from the
increasing demand for creative tools that bridge the gap between traditional artistry and modern
technology. By enabling users to translate their hand-drawn sketches into digital artwork
effortlessly, the system aims to democratize the creation of visually captivating images.
Additionally, the integration of Neural Cellular Automata introduces an element of
unpredictability and creativity, resulting in unique and expressive generative art pieces. Through
this project, we endeavor to explore the intersection of artificial intelligence, computational
creativity, and artistic expression. By leveraging cutting-edge technologies and methodologies,
we aim to empower users with a novel tool that not only simplifies the process of image
generation but also fosters creativity and exploration in the digital realm.

In today's digital age, the fusion of art and technology has opened up new avenues for creative
expression. However, despite the plethora of digital tools available, many artists still prefer the
tactile experience of sketching by hand. Recognizing this preference, our project seeks to bridge
the gap between traditional and digital art mediums by offering a seamless transition from hand-
drawn sketches to generative images. Furthermore, the project is inspired by the growing interest
in generative art, where algorithms play a pivotal role in the creative process. By leveraging
Neural Cellular Automata, a dynamic and self-organizing system, we aim to push the boundaries
of traditional image generation methods. The inherent complexity and adaptability of NCAs
allow for the creation of visually striking and conceptually rich artwork, adding a layer of depth
and intrigue to the generative process. Moreover, the project aligns with the broader trend of
democratizing access to AI technologies. By developing an intuitive interface and implementing
robust algorithms, we hope to make the transformative power of AI accessible to a wider
audience, including artists, designers, and enthusiasts alike. Through this democratization, we
aim to foster a vibrant community of creators who can explore, experiment, and innovate with
our AI-driven image generation system.

In summary, our project represents a convergence of art, technology, and accessibility. By leveraging Neural Cellular Automata, we seek to empower users to unleash their creativity and bring their artistic visions to life in ways that were previously unimaginable. Through this endeavor, we hope to inspire new forms of artistic expression and catalyze innovation in the ever-evolving landscape of digital art.

2. System Overview
The proposed system will utilize advanced artificial intelligence techniques, particularly Neural
Cellular Automata, to interpret hand-drawn sketches and generate corresponding images. Users
will interact with the system through a user-friendly interface, providing input in the form of
hand-drawn sketches. The system will then process this input using deep learning algorithms to
generate high-quality generative images. The proposed system is a cutting-edge AI platform
designed to seamlessly convert hand-drawn sketches into intricate generative images using
Neural Cellular Automata (NCA). At its core, the system leverages the power of deep learning
algorithms to interpret and transform input sketches into visually captivating artworks. By
integrating advanced AI technologies with traditional artistic methods, the system aims to
democratize the process of digital image generation. Users will have the opportunity to express
their creativity through hand-drawn sketches, which will then be translated into dynamic
generative images with the help of the NCA model. Key components of the system include an
Input Processing Module, which handles the interpretation and preprocessing of hand-drawn
sketches, and a Neural Cellular Automata Module, responsible for generating images based on
the processed input. Additionally, an Output Generation Module provides users with options to
preview, adjust, and export the generated images. Through an intuitive user interface, the system
offers a user-friendly experience, allowing artists, designers, and enthusiasts to effortlessly
navigate the image generation process. By providing customizable options for input parameters
and NCA configurations, the system caters to a wide range of artistic styles and preferences.
Overall, the system represents a fusion of artistry and technology, enabling users to explore new
creative horizons and produce visually stunning generative artworks with ease. By harnessing the
power of AI and Neural Cellular Automata, the system opens doors to endless possibilities in the
realm of digital artistry.

Neural Cellular Automata (NCA) is a computational framework inspired by traditional cellular automata models, which consist of a grid of cells, each with a state that evolves over discrete time steps based on a set of rules. However, unlike traditional cellular automata, which operate according to fixed, hand-designed rules, NCA introduces a neural network-based approach, where the state of each cell is influenced not only by its neighbors but also by the activation of a neural network.
In NCA, the grid of cells represents an image, with each cell corresponding to a pixel. The neural
network associated with the NCA model processes the input image and determines the state
transition rules for each cell. This allows for the generation of dynamic and complex patterns that
go beyond the simple binary states of traditional cellular automata. NCA models are typically
trained using a dataset of input-output pairs, where the input is an initial image configuration,
and the output is the desired state of the image after a certain number of iterations. During
training, the neural network learns to extract features from the input images and predict the
evolution of the cellular automaton over multiple time steps. One of the key advantages of NCA
is its ability to generate visually appealing and conceptually rich patterns through the interaction
of simple local rules. By combining the power of neural networks with the parallelism and
emergent behavior of cellular automata, NCA offers a flexible framework for exploring complex
dynamical systems and generating artistic and scientific simulations. In the context of image
generation, NCA can be used to create generative artworks by interpreting input sketches as
initial image configurations and iteratively evolving them using the learned transition rules.

3. Functional Requirements
3.1 Input Processing

The input processing module is responsible for accepting and preprocessing hand-drawn
sketches provided by the user. It ensures that the input data is appropriately formatted and
prepared for further processing by the Neural Cellular Automata (NCA) module.

FR1: The system shall accept hand-drawn sketches as input through a user-friendly interface.

FR2: Input sketches shall undergo preprocessing to extract relevant features, such as lines, shapes, and contours, using image processing techniques.

FR3: The preprocessing module shall handle noise reduction and image enhancement to improve the quality and clarity of the input sketches.

FR4: Users shall have the option to adjust input parameters, such as line thickness or color intensity, to customize the preprocessing process according to their preferences.

FR5: The input processing module shall validate the input data to ensure compatibility with the NCA module, including image size, format, and resolution.

FR6: In case of errors or invalid input, the system shall provide appropriate feedback to the user and prompt them to rectify the issue.
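As an illustration of how FR1-FR6 might be realized, the sketch below uses OpenCV and NumPy as assumed libraries; the accepted file extensions, the 256x256 target resolution, and the function name preprocess_sketch are hypothetical placeholders rather than fixed requirements.

import os
import cv2
import numpy as np

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg"}   # hypothetical accepted formats
TARGET_SIZE = (256, 256)                          # hypothetical NCA grid size

def preprocess_sketch(path, line_thickness=1):
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        # FR5/FR6: reject unsupported formats and report back to the caller/UI.
        raise ValueError(f"Unsupported sketch format: {ext}")
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        # FR6: report unreadable input back to the caller/UI.
        raise ValueError(f"Could not read sketch: {path}")
    # FR3: noise reduction and contrast enhancement.
    img = cv2.medianBlur(img, 3)
    img = cv2.equalizeHist(img)
    # FR2: extract line/contour structure via adaptive thresholding.
    binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)
    # FR4: user-controlled line thickness via morphological dilation.
    if line_thickness > 1:
        kernel = np.ones((line_thickness, line_thickness), np.uint8)
        binary = cv2.dilate(binary, kernel, iterations=1)
    # FR5: normalise to the resolution expected by the NCA module.
    binary = cv2.resize(binary, TARGET_SIZE, interpolation=cv2.INTER_AREA)
    return binary.astype(np.float32) / 255.0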

3.2 Neural Cellular Automata


The Neural Cellular Automata (NCA) component of the system is responsible for generating
generative images based on the processed input sketches. This module utilizes a combination of
traditional cellular automata principles and neural network-based computation to produce
visually captivating and conceptually rich artworks.

FR7: The system shall utilize Neural Cellular Automata to generate images from processed input sketches.

FR8: Users shall have the option to choose different NCA configurations, such as the number of iterations, neural network architecture, and activation functions.

FR9: The system shall train the NCA model using a dataset of hand-drawn sketches and corresponding generative images to learn the transition rules and patterns.

FR10: The NCA module shall dynamically evolve the initial image configuration over multiple time steps based on the learned transition rules and neural network activations.

FR11: Users shall have the ability to preview and adjust the generated images before finalizing them.

FR12: The system shall provide options for exporting the generated images in various formats, such as PNG, JPEG, or SVG.

The Neural Cellular Automata module plays a crucial role in the image generation process,
leveraging the synergy between traditional cellular automata principles and neural network-based
computation to create dynamic and visually striking generative artworks. By offering users the
flexibility to customize NCA configurations and providing options for previewing and exporting
generated images, the system empowers users to explore and experiment with different artistic
styles and expressions.
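A hedged sketch of the training behavior described in FR9 and FR10 is shown below. It assumes the NCAStep module sketched in Section 2, a PyTorch DataLoader yielding (seed grid, target image) pairs, mean-squared error as the reconstruction loss, and RGBA output channels; all hyperparameters are illustrative, not values specified by this SRS.

import torch
import torch.nn.functional as F

def train_nca(model, loader, epochs=50, steps=64, lr=2e-3, device="cpu"):
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for seed, target in loader:              # seed: initial grid built from a sketch
            seed, target = seed.to(device), target.to(device)
            state = seed
            for _ in range(steps):               # FR10: evolve over multiple time steps
                state = model(state)
            # Compare the visible (assumed RGBA) channels of the final state
            # with the target generative image.
            loss = F.mse_loss(state[:, :4], target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")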

3.3 Output Generation


The Output Generation module of the system is responsible for producing the final generative
images based on the processed input sketches and the output of the Neural Cellular Automata
(NCA) module. This component ensures that users can interact with and manipulate the
generated images before finalizing them for export.

FR13: The system shall generate generative images based on processed input sketches using the trained NCA model.

FR14: Users shall have the ability to preview the generated images to assess their quality and make adjustments if necessary.

FR15: The system shall provide options for users to adjust parameters related to the output images, such as resolution, size, and color depth.

FR16: Users shall have the capability to apply filters, effects, or other modifications to the generated images to enhance their visual appeal.

FR17: The system shall support the export of generated images in various formats, including but not limited to PNG, JPEG, and SVG.

FR18: Users shall have the option to save the generated images locally or share them directly via social media platforms or other communication channels.

The Output Generation module ensures that users have full control over the final output of the
system, allowing them to fine-tune and customize the generated images to meet their artistic
vision. By offering a range of options for previewing, adjusting, and exporting the images, the
system empowers users to create unique and visually captivating artworks with ease.
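The following export helper illustrates one way FR15-FR17 could be met for raster formats. It assumes the generated image arrives as a floating-point array in [0, 1] and uses Pillow; SVG export would require a separate vectorization step and is not shown. The function name and parameters are hypothetical.

import numpy as np
from PIL import Image, ImageEnhance

def export_image(array, path, size=None, sharpen=1.0):
    # Convert the generated (H, W, 3) float array in [0, 1] to an 8-bit image.
    img = Image.fromarray((np.clip(array, 0.0, 1.0) * 255).astype(np.uint8))
    if size is not None:                  # FR15: user-selected output resolution
        img = img.resize(size, Image.LANCZOS)
    if sharpen != 1.0:                    # FR16: a simple post-processing effect
        img = ImageEnhance.Sharpness(img).enhance(sharpen)
    img.save(path)                        # format inferred from the extension (.png, .jpg)
    return path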

4. Non-Functional Requirements
4.1 Performance

The performance of the system is crucial to ensure smooth and efficient operation, particularly
during the processing and generation of images. Here are the key aspects of performance
requirements:

NFR1: Efficiency: The system should efficiently process input sketches and generate images within a reasonable timeframe to provide a seamless user experience. Processing time should be optimized to minimize latency and ensure prompt feedback to the user.

NFR2: Scalability: The system should handle a varying number of users and input sketches simultaneously without degradation in performance, and should scale seamlessly with growing user demand and data volume. It should support horizontal scaling by deploying additional resources and distributing workload effectively across multiple nodes or instances.

NFR3: Resource Utilization: The system should utilize computational resources effectively, making optimal use of CPU, memory, and other hardware resources. Resource usage should be monitored and optimized to ensure efficient performance without excessive consumption.

NFR4: Response Time: The system should respond to user interactions, such as uploading sketches, adjusting parameters, and previewing images, with minimal delay. Response times should be consistent and within acceptable limits to maintain user engagement and satisfaction.

NFR5: Reliability: The system should operate reliably under normal conditions and withstand occasional spikes in user activity or system load. It should be resilient to failures and errors, ensuring continuous availability and uninterrupted functionality.

NFR6: Load Handling: The system should be capable of handling a large volume of concurrent user requests and processing intensive computational tasks without performance degradation. Load balancing mechanisms should be in place to distribute workload evenly across resources.

NFR7: Optimization: The system should be continuously optimized to improve performance, efficiency, and resource utilization. Performance bottlenecks should be identified and addressed through optimization techniques such as code refactoring, algorithm optimization, and infrastructure tuning.

Ensuring optimal performance is essential to meet user expectations and deliver a satisfying user
experience. By addressing performance requirements effectively, the system can operate
efficiently, reliably, and responsively, supporting the seamless processing and generation of
images from hand-drawn sketches.
4.2 Usability
Usability refers to the ease of use and user-friendliness of the system interface, ensuring that
users can interact with the system intuitively and efficiently. Here's a brief overview of usability
requirements:

NFR8: Intuitive Interface: The system should feature an intuitive and easy-to-navigate interface, allowing users to perform tasks without the need for extensive training or guidance. Elements such as clear labels, logical layouts, and intuitive controls contribute to the overall usability of the interface.

NFR9: Clear Instructions: The system should provide clear and concise instructions to guide users through the various functionalities and features. Instructions should be presented in a user-friendly manner, using plain language and visual aids where applicable, to enhance comprehension and usability.

NFR10: Multilingual Support: The system should support multiple languages to accommodate users from diverse linguistic backgrounds. Language options should be easily accessible and configurable, allowing users to interact with the system in their preferred language for improved usability and accessibility.

NFR11: Consistency: The system interface should maintain consistency in design, layout, and interaction patterns across different screens and modules. Consistency enhances usability by reducing cognitive load and allowing users to predict the behavior of interface elements intuitively.

Usability plays a critical role in ensuring user satisfaction and engagement with the system. By
prioritizing intuitive design, clear instructions, multilingual support, and consistency, the system
can enhance usability and provide users with a seamless and enjoyable experience.

4.3 Reliability
Reliability refers to the system's ability to perform consistently and predictably under various
conditions, ensuring that it operates as expected without failures or disruptions. Here's a brief
overview of reliability requirements:

NFR12: Data Integrity and Security: The system should maintain the integrity and security of user data, ensuring that sensitive information is protected from unauthorized access, manipulation, or loss. Robust data encryption, access controls, and authentication mechanisms should be implemented to safeguard data confidentiality and integrity.

NFR13: Error Handling: The system should be equipped with robust error handling mechanisms to detect, report, and recover from errors and exceptions effectively. Error messages should be informative and actionable, guiding users on how to resolve issues and mitigate potential disruptions to system operation.

NFR14: Resilience: The system should be resilient to system failures, crashes, and disruptions, ensuring continuous availability and uninterrupted operation. Redundancy, failover mechanisms, and disaster recovery procedures should be in place to minimize downtime and maintain service reliability.

Reliability is essential for building trust and confidence in the system among users and stakeholders. By prioritizing data integrity and security, implementing robust error handling mechanisms, and ensuring system resilience, the system can deliver a dependable user experience, fostering user satisfaction and trust in its performance.

5. System Architecture
The system architecture outlines the high-level structure and components of the AI-based image
generation system. It defines how various modules and components interact to fulfill the system's
functional and non-functional requirements. Here's an overview of the system architecture:

Input Processing Module:- This module is responsible for accepting and preprocessing hand-
drawn sketches provided by users. It extracts relevant features from the input sketches and
prepares them for further processing by the Neural Cellular Automata (NCA) module.

Neural Cellular Automata (NCA) Module:- The NCA module generates generative images
based on the processed input sketches. It utilizes a combination of traditional cellular automata
principles and neural network-based computation to produce dynamic and visually appealing
artworks.

Output Generation Module:- This module produces the final generative images based on the
output of the NCA module. It allows users to preview, adjust, and customize the generated
images before finalizing them for export. It also supports the export of generated images in
various formats.

User Interface:- The user interface provides an intuitive and user-friendly platform for users to
interact with the system. It includes features such as uploading sketches, adjusting parameters,
previewing images, and exporting final artworks. The interface is designed to be accessible and
responsive, catering to users from diverse backgrounds and skill levels.

Data Management System:- This component manages the storage and retrieval of data used by
the system, including input sketches, training data for the NCA model, and generated images. It
ensures data integrity, security, and efficient access to support the system's functionality.

Training Pipeline:- The training pipeline is responsible for training and updating the NCA
model using a dataset of hand-drawn sketches and corresponding generative images. It
incorporates machine learning algorithms and techniques to optimize the model's performance
and accuracy over time.

Infrastructure and Deployment:- The system architecture is supported by a scalable and resilient infrastructure that includes servers, storage systems, networking components, and other resources. It can be deployed on-premises or in the cloud, leveraging cloud computing services for scalability, flexibility, and cost-effectiveness.

Overall, the system architecture is designed to facilitate the seamless processing and generation
of images from hand-drawn sketches, leveraging advanced AI techniques and technologies to
deliver high-quality and visually appealing generative artworks. It ensures reliability, scalability,
and performance to meet the diverse needs and expectations of users.
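The sketch below shows how the modules described above might be wired together at the orchestration level. The class and function names are hypothetical placeholders, not the project's actual API, and details such as tensor shapes and error handling are omitted.

class ImageGenerationPipeline:
    """High-level wiring of the three core modules (names are hypothetical)."""

    def __init__(self, preprocessor, nca_model, exporter, steps=64):
        self.preprocessor = preprocessor  # Input Processing Module
        self.nca_model = nca_model        # Neural Cellular Automata Module
        self.exporter = exporter          # Output Generation Module
        self.steps = steps                # configured number of NCA iterations

    def run(self, sketch_path, output_path):
        state = self.preprocessor(sketch_path)    # sketch -> initial grid (seed)
        for _ in range(self.steps):               # evolve the grid step by step
            state = self.nca_model(state)
        return self.exporter(state, output_path)  # write/export the final image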

6. Data Flow Diagram

The data flow mirrors the architecture described above: the user uploads a hand-drawn sketch through the user interface; the Input Processing Module validates and preprocesses it; the Neural Cellular Automata Module iteratively evolves the preprocessed grid into a generative image; and the Output Generation Module returns the result to the user for preview, adjustment, and export. The Data Management System stores input sketches, training data, and generated images throughout this flow.

7. User Interface Design

The user interface (UI) design is crucial for ensuring a seamless and intuitive user experience,
allowing users to interact with the system's functionalities effectively. Here's an overview of the
user interface design considerations:

Sketch Upload:- The UI should provide a user-friendly interface for users to upload hand-drawn
sketches easily. It should support various file formats and provide feedback on successful
uploads.

Parameter Adjustment: -Users should have the ability to adjust input parameters, such as line
thickness, color intensity, or other settings, to customize the preprocessing process. The UI
should include intuitive controls for parameter adjustment, such as sliders, dropdown menus, or
input fields.

Preview and Adjustment: - The UI should allow users to preview the generated images and
make adjustments as needed before finalizing them for export. It should provide interactive tools
for zooming, panning, and rotating images, as well as options for applying filters, effects, or
other modifications.

Export Options: - The UI should offer options for exporting the final generative images in
various formats, such as PNG, JPEG, or SVG. It should provide clear instructions and prompts
for exporting images and support batch exporting for multiple images.

Feedback and Notifications: - The UI should provide informative feedback and notifications to
users throughout the image generation process. This includes feedback on successful actions,
error messages for invalid inputs or errors, and progress indicators for lengthy processes.

Accessibility:- The UI should be designed with accessibility in mind, ensuring that users with
disabilities can access and interact with the system effectively. This includes providing
alternative text for images, keyboard navigation support, and compatibility with screen readers
and other assistive technologies.

Consistency and Clarity: - The UI should maintain consistency in design, layout, and interaction patterns to enhance usability and user satisfaction. Clear labels, logical grouping of elements, and intuitive navigation paths contribute to a positive user experience.

Responsive Design: - The UI should be responsive and adaptive, ensuring a consistent user
experience across different devices and screen sizes. It should adjust dynamically to
accommodate changes in screen orientation, resolution, or aspect ratio.

By incorporating these design principles and considerations, the user interface can facilitate a
smooth and enjoyable user experience, enabling users to interact with the system's functionalities
effortlessly and achieve their desired outcomes effectively.
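As one possible realization of these UI considerations, the sketch below uses the Gradio library purely as an example; this SRS does not mandate a specific UI framework, and generate_image is a hypothetical stand-in for the backend pipeline.

import gradio as gr

def generate_image(sketch, line_thickness, iterations):
    # Placeholder: forward the uploaded sketch and parameters to the backend
    # pipeline; echoes the input until the real pipeline is connected.
    return sketch

demo = gr.Interface(
    fn=generate_image,
    inputs=[
        gr.Image(type="numpy", label="Hand-drawn sketch"),          # sketch upload
        gr.Slider(1, 10, value=1, step=1, label="Line thickness"),  # parameter adjustment
        gr.Slider(8, 256, value=64, step=8, label="NCA iterations"),
    ],
    outputs=gr.Image(label="Generated image"),                      # preview
    title="Sketch to Generative Image",
)

if __name__ == "__main__":
    demo.launch()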
8. System Testing

8.1 Unit Testing

Unit testing is a fundamental aspect of software testing that focuses on testing individual
components or units of code in isolation to ensure their correctness and functionality. In the
context of the AI-based image generation system, unit testing involves testing the functionality
of each module or component independently to verify that it behaves as expected.

Component Isolation: - Each module or component of the system is tested in isolation, without
considering its interactions with other modules or external dependencies. This allows for precise
identification and isolation of defects within specific code units.

Test Case Development: - Test cases are developed to cover various scenarios and edge cases,
including normal inputs, boundary conditions, and error conditions. These test cases are designed
to exercise different paths through the code and validate the behavior of individual functions or
methods.

Test Execution: - Test cases are executed using automated testing frameworks or tools, which
run the tests and report the results automatically. This enables efficient and systematic testing of
code units, allowing for rapid identification and resolution of defects.

Assertion and Validation: - During test execution, assertions are used to validate the expected
behavior of code units against predefined criteria. If the actual output of a code unit matches the
expected output, the test case passes; otherwise, it fails, indicating a potential defect.

Debugging and Refactoring: - If a test case fails, developers can use the test results to identify and debug the underlying issues in the code. Once defects are resolved, the code may undergo refactoring to improve its design, readability, and maintainability.

Unit testing is essential for ensuring the reliability, robustness, and maintainability of the system's codebase. By systematically testing individual components in isolation, developers can identify and address defects early in the development process, minimizing the risk of software failures and ensuring the overall quality of the system.
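The example below illustrates this unit-testing approach, assuming pytest as the test runner and the hypothetical preprocess_sketch function sketched in Section 3.1, imported from a hypothetical input_processing module.

import numpy as np
import pytest
import cv2

# Hypothetical import path for the preprocessing function sketched in Section 3.1.
from input_processing import preprocess_sketch

def test_preprocess_returns_normalised_grid(tmp_path):
    # Arrange: write a small synthetic sketch (a single diagonal line) to disk.
    sketch = np.full((64, 64), 255, dtype=np.uint8)
    cv2.line(sketch, (5, 5), (60, 60), 0, 2)
    path = tmp_path / "sketch.png"
    cv2.imwrite(str(path), sketch)
    # Act
    result = preprocess_sketch(str(path))
    # Assert: the output matches the input contract of the NCA module.
    assert result.shape == (256, 256)
    assert result.dtype == np.float32
    assert 0.0 <= result.min() and result.max() <= 1.0

def test_preprocess_rejects_unreadable_file():
    # FR6: invalid input should raise so the UI can report the error to the user.
    with pytest.raises(ValueError):
        preprocess_sketch("does_not_exist.png")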
8.2 Integration Testing

Integration testing is a crucial phase of software testing that focuses on verifying the interactions
and interfaces between different modules or components of the system. In the context of the AI-
based image generation system, integration testing involves testing the integration points
between various modules to ensure that they work together as intended.

Integration Scenarios: - Integration testing involves testing different integration scenarios, including interactions between modules, communication between subsystems, and data exchange between components. These scenarios cover both normal and exceptional conditions to validate the robustness and reliability of the system.

Interface Testing: -Interface testing verifies the interfaces and interactions between modules,
ensuring that data is passed correctly, function calls are made successfully, and dependencies are
resolved properly. This includes testing input and output interfaces, API endpoints, and
communication protocols between components.

Dependency Management: - Integration testing verifies the management of dependencies between modules, ensuring that all required dependencies are satisfied and that changes in one module do not adversely affect other modules. This includes testing dependency injection, version compatibility, and library integration.

End-to-End Testing: - Integration testing may also include end-to-end testing, which validates
the entire system's functionality from the user interface to the backend components. This ensures
that all modules work together seamlessly to fulfill user requirements and achieve the system's
objectives.

Mocking and Stubbing: - In integration testing, mock objects and stubs may be used to simulate
the behavior of external dependencies or unavailable components. This allows for controlled
testing of integration points and ensures that tests can be executed in isolation without relying on
external systems.

Regression Testing: - Integration testing includes regression testing to ensure that changes or
updates to one module do not introduce regressions or break existing functionality in other
modules. This involves retesting previously integrated components to verify that they still
function correctly after changes are made.

Integration testing plays a critical role in validating the interactions and interoperability of
different modules within the system. By systematically testing integration points, interface
interactions, and dependency management, integration testing helps ensure the overall reliability,
stability, and performance of the AI-based image generation system.
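The sketch below illustrates the mocking and stubbing approach described above, assuming pytest and the hypothetical ImageGenerationPipeline from Section 5, with the NCA model replaced by an identity stub so that only the wiring between modules is exercised.

import numpy as np

# Hypothetical import path for the orchestration class sketched in Section 5.
from pipeline import ImageGenerationPipeline

def test_pipeline_wires_modules_together(tmp_path):
    calls = []

    def fake_preprocessor(path):
        calls.append("preprocess")
        return np.zeros((1, 16, 8, 8), dtype=np.float32)  # tiny stand-in grid

    def fake_nca(state):
        calls.append("nca")
        return state  # identity stub in place of the trained NCA model

    def fake_exporter(state, path):
        calls.append("export")
        return path

    pipeline = ImageGenerationPipeline(fake_preprocessor, fake_nca, fake_exporter, steps=3)
    out = pipeline.run("sketch.png", str(tmp_path / "out.png"))

    # Modules should be invoked in order: preprocess, NCA (3 steps), export.
    assert calls == ["preprocess", "nca", "nca", "nca", "export"]
    assert out.endswith("out.png")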
9. Conclusion

The proposed AI system for converting hand drawings into generative images using Neural
Cellular Automata offers a novel approach to creative expression. By leveraging advanced deep
learning techniques, the system aims to empower users to transform their artistic ideas into
visually stunning images. The development of the AI-based image generation system represents
a significant advancement in the field of computational creativity, offering users a powerful tool
for transforming hand-drawn sketches into visually stunning generative artworks. Through the
integration of advanced artificial intelligence techniques, including Neural Cellular Automata
(NCA), the system enables users to explore new creative horizons and express their artistic
visions in innovative ways. Throughout the development process, a comprehensive set of functional and non-functional requirements was defined to guide the design, implementation, and testing of the system. These requirements encompassed aspects such as input processing, NCA-based image generation, usability, performance, and reliability, ensuring that the system meets the diverse needs and expectations of users. The system architecture was carefully
designed to facilitate the seamless processing and generation of images, leveraging a modular
and scalable design that enables flexibility, extensibility, and maintainability. Key components
such as the input processing module, NCA module, output generation module, and user interface
were developed and integrated to create a cohesive and functional system. Testing played a
crucial role in ensuring the quality and reliability of the system, with both unit testing and
integration testing being conducted to validate the functionality, performance, and
interoperability of different components. By systematically testing individual units and
integration points, developers were able to identify and address defects early in the development
process, minimizing the risk of software failures and ensuring the overall robustness of the
system. Moving forward, the AI-based image generation system holds promise for empowering
users to unleash their creativity and produce captivating artworks with ease. By providing a user-
friendly interface, powerful AI algorithms, and a seamless workflow, the system aims to
democratize the creation of generative art and inspire new forms of artistic expression in the
digital realm. With continued refinement and enhancement, the system has the potential to make
a significant impact on the fields of art, design, and technology, opening doors to endless
possibilities for creative exploration and innovation. In addition to the technical aspects, the development of the AI-based image generation system also underscores broader implications and potential applications across various domains. One such consideration is artistic exploration: the system provides users with a novel and accessible tool for creating generative art, so artists, designers, and enthusiasts can experiment with different styles, techniques, and concepts, pushing the boundaries of traditional artistry and unlocking new creative potential. Online communities, forums, and social media platforms can serve as hubs for exchanging ideas, sharing techniques, and showcasing artworks generated using the system.

Overall, the development of the AI-based image generation system represents a convergence of
technology, art, and creativity, with far-reaching implications for society, culture, and human
expression. By harnessing the power of AI algorithms, the system empowers individuals to
explore their creative potential, connect with others, and contribute to the ever-evolving
landscape of digital art and innovation.
10. References

1. Title: Sketch Generation with RNN-based Variational Autoencoders
Authors: Yiwen Guo, Jianping Shi, Eric P. Xing
Link: https://ieeexplore.ieee.org/document/8516988/

2. Title: DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling
Authors: Yi Yuan, Tong Sun, Chao Xu, Jianrui Cai, Xiangyang Ji
Link: IEEE Xplore

3. Title: Sketch-based 3D Shape Retrieval Using Convolutional Neural Networks
Authors: Haolin Chen, Guoxian Dai, Chenyang Zhu, Junwei Han, Yongtao Wang
Link: https://ieeexplore.ieee.org/document/9190963/

4. Title: Sketch-Based Image Retrieval via Siamese Convolutional Neural Network
Authors: Yonggang Qi, Yi-Zhe Song, Honggang Zhang, Jun Liu
Link: https://ieeexplore.ieee.org/document/7532801/

5. Title: Sketch Generation Using Recurrent Variational Autoencoders with Spatial Attention
Authors: Haoyu Wu, Li Tan, Lei Zhu, Qiu Chen
Link: https://ieeexplore.ieee.org/document/9163112/

6. Title: Sketch Recognition with Multi-Scale Convolutional Neural Networks
Authors: Huiqi Li, Hongbin Zhang, Wenjuan Gong, Xiangyang Ji
Link: IEEE Xplore

7. Title: Sketch-Based Image Retrieval via Deep Learning: Challenges and Solutions
Authors: Xiaoqiang Li, Shijian Lu, Minsi Wang, Hongfu Liu
Link: IEEE Xplore
