
Requirements

The system needs to support multiple gateways per customer, with each
gateway having more than one connection type, and each connection type giving
access to a different set of endpoints.
One customer might have multiple gateways from the same provider, so the credentials
should be associated with the gateway itself - treated like a password and stored in a
column with some form of asymmetric encryption, preferably elliptic-curve cryptography
such as Curve25519. The first challenge here is that each gateway has a different
authentication method. A lazy solution would be storing the credentials in a JSON-typed
field, but that could prove to be a typing nightmare if we don't do proper
validation of what goes into that field.
Maybe we could create a map from each provider to its credential format and check it
during validation, ensuring that only credentials of the right shape are inserted. Since
we already have different services for different types of brokers, that could be viable.
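That provider-to-format map could be as simple as a field-set schema checked before the JSON is written. The provider names and field sets below are just examples:

```go
package main

import (
	"fmt"
	"slices"
)

// providerSchemas maps each provider to the credential fields it
// requires (illustrative; the real sets would come from the broker
// services for each provider).
var providerSchemas = map[string][]string{
	"sensedia": {"customer_id", "customer_secret"},
	"mulesoft": {"auth_url", "auth_secret", "scopes"},
}

// validateCredentials checks that a decoded JSON credentials map
// contains exactly the fields its provider expects - nothing missing,
// nothing extra - before it is written to the column.
func validateCredentials(provider string, creds map[string]any) error {
	fields, ok := providerSchemas[provider]
	if !ok {
		return fmt.Errorf("unknown provider %q", provider)
	}
	for _, f := range fields {
		if _, ok := creds[f]; !ok {
			return fmt.Errorf("%s credentials missing field %q", provider, f)
		}
	}
	for k := range creds {
		if !slices.Contains(fields, k) {
			return fmt.Errorf("%s credentials have unexpected field %q", provider, k)
		}
	}
	return nil
}
```

Rejecting unexpected fields as well as missing ones keeps the JSON column from silently accumulating junk keys.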
The lookup could be as simple as this:
type Credentials interface {
	LookupCredentials(gtw *models.Gateway) map[string]string
}

type SensediaCredentials struct {
	CustomerID     string
	CustomerSecret string
}

type MulesoftCredentials struct {
	AuthURL    string
	AuthSecret string
	Scopes     []string
}

// Both providers satisfy Credentials by returning the shared map shape;
// gtw.Credentials is assumed to be the decoded map from the JSON column.
func (s *SensediaCredentials) LookupCredentials(gtw *models.Gateway) map[string]string {
	return map[string]string{
		"customer_id":     gtw.Credentials["customer_id"],
		"customer_secret": gtw.Credentials["customer_secret"],
	}
}

func (m *MulesoftCredentials) LookupCredentials(gtw *models.Gateway) map[string]string {
	return map[string]string{
		"auth_url":    gtw.Credentials["auth_url"],
		"auth_secret": gtw.Credentials["auth_secret"],
		"scopes":      gtw.Credentials["scopes"], // scopes stored comma-separated
	}
}
The same goes for the connections: different sets of maps, this time keyed by
connection type, with the access level of the connection type as the value. As we
intend to make this configurable, it needs to be stored and not just hard coded in the
global scope. Since we're already doing a read and would need a lookup based on the
type anyway, that doesn't really add much overhead other than the size of the queried
data; considering we're already dealing with a small footprint, that's acceptable.
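A minimal sketch of that stored map's shape, with the connection type as the key and the access level as the value. The literal here just stands in for the stored rows; the type names and levels are assumptions:

```go
package main

import "fmt"

// accessLevels stands in for the stored configuration; in the real
// system this map would be loaded from the database, not hard coded.
var accessLevels = map[string]string{
	"portal":      "full",     // full response, later run through the parser
	"integration": "filtered", // pre-filtered subset of endpoints
}

// accessLevelFor resolves the level for a connection type, failing
// loudly on unknown types rather than defaulting to broader access.
func accessLevelFor(connType string) (string, error) {
	level, ok := accessLevels[connType]
	if !ok {
		return "", fmt.Errorf("unknown connection type %q", connType)
	}
	return level, nil
}
```

Failing closed on an unknown type means a misconfigured connection never falls through to the full response.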
Using proper vectors with SQLite's vector support is a good approach as well, but that will
depend on familiarity with their usage and how hard the ramp-up is; if, after some study
and POCs, it's in a workable state, I would use this. Implementing vector search with this
approach would speed things up considerably.
Relationships
We're following the "less complicated relationships, the better" road, so the only
one-to-many relationship I see a need for is customer -> gateways; the others can be
avoided by using either vectors or direct references.
Having to deal with a bunch of lookup tables would really slow down the development
process and raise the cost of running this in the long haul. The system shouldn't be
expanded much more than this.
Parser
Each provider should have its own parser, translating the gateway's responses into a
default format for each base endpoint.
The default format for each endpoint should have its own documentation, covering
the needs of the frontend without expanding its surface area much. That said, it
should accept query parameters to create new views and return different filters to
match developer needs.
For example, the base response for the /apps endpoint shouldn't return all of the extra
info or compromising data, but a developer could expand it by creating a module in
their parser package that adds a query param to toggle the kind of
response they want. This way we make the endpoints expandable without compromising
the simplicity of the core needs.
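One way to sketch the per-provider parser and the toggleable response; the App shape, field names, and the raw payload format are assumptions, not part of the design:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// App is a hypothetical default format for the /apps endpoint.
type App struct {
	ID   string `json:"id"`
	Name string `json:"name"`
	// Extra carries provider-specific fields, populated only when the
	// caller toggles the expanded view via a query param.
	Extra map[string]any `json:"extra,omitempty"`
}

// AppsParser is implemented once per provider.
type AppsParser interface {
	ParseApps(raw []byte, expanded bool) ([]App, error)
}

// exampleParser assumes the gateway returns a JSON array of objects
// with at least "id" and "name" keys.
type exampleParser struct{}

func (exampleParser) ParseApps(raw []byte, expanded bool) ([]App, error) {
	var src []map[string]any
	if err := json.Unmarshal(raw, &src); err != nil {
		return nil, err
	}
	apps := make([]App, 0, len(src))
	for _, item := range src {
		app := App{ID: fmt.Sprint(item["id"]), Name: fmt.Sprint(item["name"])}
		if expanded {
			app.Extra = item // full provider payload, opt-in only
		}
		apps = append(apps, app)
	}
	return apps, nil
}
```

The base view stays minimal by construction; the sensitive or verbose fields only appear when a caller explicitly opts in.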
This will be consumed through global endpoints with variable controllers, so
/v1/api/apps is the same endpoint every user consumes, with the customer and
gateway extracted from the headers. This allows for more ease of use and less
chance of exposing customer and gateway IDs through URL sharing or consumption by
3rd parties.
Connections
There should be multiple connection types, with different levels of access, following the
approach described above. The connection determines what sort of response the
user can get: our portals will mostly use the Portal connection type, which means they'll
get the full response and then have it converted by the parser.
There will be a connection type just for integrations like Zenvia's, where they'll get access
to some of the endpoints already filtered and without being parsed by the Portal parser.
To get the data parsed for the Portals, the user will have to pass in a valid portal ID and
a connection type of Portal.
The different types of connections will be treated as different gateways. This might spur
the question "why?" - it's mostly for security: this way we can create specific,
traceable keys per integration, which makes monitoring easier and helps avoid data leaks.
For example, let's suppose one of our customers allows an integration through our
broker to a 3rd party, and this 3rd party is allowed to fetch the APIs and apps but without
the developer data on them.
The customer would configure a new gateway with the same credentials as their main
gateway but a different connection-type configuration, and would be able to choose
which endpoints that gateway can access and which fields are returned, while also being
able to configure authorization headers for that specific type of connection.
This avoids having to generate multiple credentials on the provider, while still producing
a specific authorization for that integration.
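The per-connection configuration described above could be modeled roughly like this; the field names are assumptions:

```go
package main

// ConnectionConfig sketches the per-gateway connection settings: which
// endpoints the connection may call, which fields each endpoint
// returns, and any extra authorization headers for that integration.
type ConnectionConfig struct {
	Type             string              // e.g. "portal", "integration"
	AllowedEndpoints []string            // endpoints this connection may call
	ReturnedFields   map[string][]string // endpoint -> fields kept in the response
	AuthHeaders      map[string]string   // extra authorization headers
}

// FilterResponse keeps only the configured fields for an endpoint,
// so a 3rd-party connection never sees developer data; endpoints with
// no field list configured pass through unchanged.
func (c ConnectionConfig) FilterResponse(endpoint string, row map[string]any) map[string]any {
	keep, ok := c.ReturnedFields[endpoint]
	if !ok {
		return row
	}
	out := make(map[string]any, len(keep))
	for _, f := range keep {
		if v, ok := row[f]; ok {
			out[f] = v
		}
	}
	return out
}
```

Because the filtering lives on the connection rather than the provider, the same upstream credentials can back both the full Portal view and a restricted integration view.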
Config UI
It needs a templ-based UI - maybe using Superkit with HTMX to handle
reactivity.