Project Configuration
The project's environment variables are configured in api/config/dev.env.js:
Variable | Default | Description |
---|---|---|
API_BASE_URL | http://127.0.0.1 | API base URL |
API_PORT | 3000 | API port number |
THEME | carbon | Admin dashboard theme [ default, light, dark, carbon, teal ] |
LOGO | src/assets/logo.png | Path of the logo image displayed in the admin dashboard |
BRAND | flashboard | Brand name displayed in the admin dashboard |
forbidden_download | false | Forbid downloading any data from the admin dashboard |
disable_admin | false | Disable the admin dashboard and expose the RESTful API only |
lang | en | Language; you can add or change language keywords in src/langs/langs.js |
rtl | false | RTL (right-to-left) support |
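A minimal sketch of what these settings could look like in api/config/dev.env.js, assuming a plain module.exports object (values shown are the defaults from the table; match the export style already used in your copy of the file):
api/config/dev.env.js
// Sketch only; keep the export format your project already uses.
module.exports = {
  API_BASE_URL: 'http://127.0.0.1', // API base URL
  API_PORT: 3000,                   // API port number
  THEME: 'carbon',                  // default, light, dark, carbon, teal
  LOGO: 'src/assets/logo.png',      // logo displayed in the admin dashboard
  BRAND: 'flashboard',              // brand name displayed in the admin dashboard
  forbidden_download: false,        // forbid downloading data from the admin dashboard
  disable_admin: false,             // disable the admin dashboard, RESTful API only
  lang: 'en',                       // language keywords live in src/langs/langs.js
  rtl: false                        // right-to-left layout support
};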
Datasource Config
LoopBack models connect to backend systems such as databases via data sources that provide create, retrieve, update, and delete (CRUD) functions. LoopBack also generalizes other backend services, such as REST APIs, SOAP web services, and storage services, and so on, as data sources.
Data sources are backed by connectors that implement the data exchange logic using database drivers or other client APIs. In general, applications don’t use connectors directly, rather they go through data sources using the DataSource and PersistedModel APIs.
Basic procedure
To connect a model to a data source, follow these steps:
- Use the data source generator to create a new data source.
$ slc loopback:datasource
? Enter the data-source name: mysql-corp
? Select the connector for mysql: MySQL (supported by StrongLoop)
Follow the prompts to name the data source and select the connector to use. This adds the new data source to [datasources.json](https://loopback.io/doc/en/lb2/datasources.json.html).
- Edit api/server/datasources.json to add the necessary authentication credentials: typically hostname, username, password, and database name.
For example:
api/server/datasources.json
"mysql-corp": {
"name": "mysql-corp",
"connector": "mysql",
"host": "your-mysql-server.foo.com",
"user": "db-username",
"password": "db-password",
"database": "your-db-name"
}
For information on the properties that each connector supports, see documentation for the specific connector under Connectors reference.
- Install the corresponding connector as a dependency of your app with npm.
For example:
$ cd <your-app>/api
$ npm install --save loopback-connector-mysql
See Connectors reference for the list of connectors.
- Use the model generator to create a model.
$ slc loopback:model
? Enter the model name: myModel
? Select the data-source to attach myModel to: mysql-corp (mysql)
? Select model's base class: PersistedModel
? Expose myModel via the REST API? Yes
? Custom plural form (used to build REST URL):
Let's add some myModel properties now.
...
When prompted for the data source to attach to, select the one you just created.
Note: The model generator lists the memory connector, "no data source," and the data sources listed in [datasources.json](https://loopback.io/doc/en/lb2/datasources.json.html). That's why you created the data source first in step 1.
You can also create models from an existing database; see Creating models for more information.
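For reference, the model generator writes a model definition under api/common/models/ and registers the model in api/server/model-config.json. A sketch of both files for the example above (the title property is purely illustrative):
api/common/models/my-model.json
{
  "name": "myModel",
  "base": "PersistedModel",
  "properties": {
    "title": { "type": "string", "required": true }
  }
}
api/server/model-config.json
"myModel": {
  "dataSource": "mysql-corp",
  "public": true
}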
Connectors
Note: In addition to the connectors below that IBM/StrongLoop supports, community connectors developed and maintained by the LoopBack community enable you to connect to CouchDB, Neo4j, Elasticsearch, and many others. See Community connectors for more information.
The following table lists commercially-supported LoopBack connectors. For more information, see Database connectors and Non-database connectors.
Connector | Module | Installation |
---|---|---|
Database connectors (*run all commands in the api directory*) | | |
IBM Cloudant | loopback-connector-cloudant | npm install --save loopback-connector-cloudant |
IBM DashDB | loopback-connector-dashdb | npm install --save loopback-connector-dashdb |
IBM DB2 | loopback-connector-db2 | npm install --save loopback-connector-db2 |
IBM DB2 for z/OS | loopback-connector-db2z | npm install --save loopback-connector-db2z |
IBM Informix | loopback-connector-informix | npm install --save loopback-connector-informix |
Memory connector | Built in to LoopBack | Not required; suitable for development and debugging only. |
MongoDB | loopback-connector-mongodb | npm install --save loopback-connector-mongodb |
MySQL | loopback-connector-mysql | npm install --save loopback-connector-mysql |
Oracle | loopback-connector-oracle | npm install --save loopback-connector-oracle |
PostgreSQL | loopback-connector-postgresql | npm install --save loopback-connector-postgresql |
SQL Server | loopback-connector-mssql | npm install --save loopback-connector-mssql |
SQLite 3.x | loopback-connector-sqlite3 | npm install --save loopback-connector-sqlite3 |
Other connectors | ||
Email connector | Built in to LoopBack | Not required |
Push connector | loopback-component-push | npm install --save loopback-component-push |
Remote connector | loopback-connector-remote | npm install --save loopback-connector-remote |
REST | loopback-connector-rest | npm install --save loopback-connector-rest |
SOAP | loopback-connector-soap | npm install --save loopback-connector-soap |
Storage connector | loopback-component-storage | npm install --save loopback-component-storage |
Installing a connector
Run npm install --save <connector-module> in your application's api directory to add the dependency to package.json; for example, to install the Oracle database connector:
$ cd <your-app>/api
$ npm install --save loopback-connector-oracle
This command adds the following entry to package.json:
package.json
... "dependencies": { "loopback-connector-oracle": "latest" } ...
Creating a data source
Use the data source generator to create a new data source:
$ slc loopback:datasource
Follow the prompts to add the desired data source.
You can also create a data source programmatically; see Advanced topics: data sources for more information.
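As a rough sketch of the programmatic route, mirroring the MySQL example above (credentials are placeholders; the declarative datasources.json approach is usually simpler):
/api/server/server.js
var loopback = require('loopback');

// Same placeholder values you would otherwise put in datasources.json.
var db = loopback.createDataSource({
  connector: 'mysql',
  host: 'your-mysql-server.foo.com',
  user: 'db-username',
  password: 'db-password',
  database: 'your-db-name'
});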
Data source properties
Data source properties depend on the specific data source being used. However, data sources for database connectors (Oracle, MySQL, PostgreSQL, MongoDB, and so on) share a common set of properties, as described in the following table.
Property | Type | Description |
---|---|---|
connector | String | Connector name; for example, loopback-connector-mysql or mysql |
database | String | Database name |
debug | Boolean | If true, turn on verbose mode to debug database queries and lifecycle. |
host | String | Database host name |
password | String | Password to connect to database |
port | Number | Database TCP port |
url | String | Combines and overrides the host, port, user, password, and database properties. Only valid with the MongoDB, PostgreSQL, and SQL Server connectors. |
username | String | Username to connect to database |
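For example, a MongoDB data source can be declared with a single url instead of separate host, port, and credential properties (a sketch; server, credentials, and database name are placeholders):
api/server/datasources.json
"mongo-ds": {
  "name": "mongo-ds",
  "connector": "mongodb",
  "url": "mongodb://db-username:db-password@your-mongo-server.foo.com:27017/your-db-name"
}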
Built-in models
LoopBack provides useful built-in models for common use cases:
- Application model - contains metadata for a client application that has its own identity and associated configuration with the LoopBack server.
- User model - register and authenticate users of your app locally or against third-party services.
- Access control models - ACL, AccessToken, Scope, Role, and RoleMapping models for controlling access to applications, resources, and methods.
- Email model (see email connector) - send emails to your app users using SMTP or third-party services.
The built-in models (except for Email) extend PersistedModel, so they automatically have a full complement of create, retrieve, update, and delete (CRUD) operations.
Note:
By default, only the User model is exposed over REST. To expose the other models, change the model's public property to true in api/server/model-config.json. See Exposing models for more information. Use caution: exposing some of these models over a public API may be a security risk.
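For example, to expose the built-in Role model over REST you could flip its entry in api/server/model-config.json (a sketch; "db" stands for whatever data source the model is attached to, and the security warning above applies):
api/server/model-config.json
"Role": {
  "dataSource": "db",
  "public": true
}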
User Model
The User model represents users of the application or API. The default model definition file is common/models/user.json in the LoopBack repository.
Important:
You must create your own custom model (named something other than “User,” for example “Customer” or “Client”) that extends the built-in User model rather than use the built-in User model directly. The built-in User model provides a great deal of commonly-used functionality that you can use via your custom model.
LoopBack does not support multiple models based on the User model in a single application. That is, you cannot have more than one model derived from the built-in User model in a single app.
For more information, see Managing users.
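A minimal sketch of such a custom model, here hypothetically named Customer; setting base to User is what pulls in the built-in registration, login, and password-handling behavior:
api/common/models/customer.json
{
  "name": "Customer",
  "base": "User",
  "properties": {},
  "validations": [],
  "acls": []
}
Attach Customer to a data source in api/server/model-config.json just like any other model.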
User Model JSON:
The built-in user model configuration file is in api/common/User.json:
{ "name": "User", "properties": { "realm": { "type": "string" }, "username": { "type": "string", "defaultColumn":true }, "type": { "type": "string", "uiType": "select", "default": "user", "options": [ { "value": "admin", "label": "Administrator" }, { "value": "user", "label": "User" } ] }, "password": { "type": "string", "required": true }, "email": { "type": "string", "required": true, "defaultColumn":true }, "emailVerified": {"type":"boolean"}, "verificationToken": {"type":"string"} }, "options": { "caseSensitiveEmail": true }, "hidden": ["password", "verificationToken"], ....
As you can see, you can easily customize your user model's properties and options.
Data Storage Config
The LoopBack storage component makes it easy to upload and download files to cloud storage providers and the local (server) file system. It has Node.js and REST APIs for managing binary content in cloud providers, including:
- Amazon
- Rackspace
- OpenStack
- Azure
You use the storage component like any other LoopBack data source such as a database. Like other data sources, it supports create, read, update, and delete (CRUD) operations with exactly the same LoopBack and REST APIs.
The storage component organizes content as containers and files. A container holds a collection of files, and each file belongs to one container.
- Container groups files, similar to a directory or folder. A container defines the namespace for objects and is uniquely identified by its name, typically within a user account. NOTE: A container cannot have child containers.
- File stores the data, such as a document or image. A file is always in one (and only one) container. Within a container, each file has a unique name. Files in different containers can have the same name.
Default Storage Component
The initial flashboard project includes a default storage component in /api/server/datasources.json, with the following JSON configuration:
"container": { "name": "container", "connector": "loopback-component-storage", "provider": "filesystem", "maxFileSize": "1000048576", "root": "uploads" }
Creating a storage component data source
You can create a storage component data source either using the command-line tools and the /api/server/datasources.json file, or programmatically in JavaScript.
Using CLI and JSON
Create a new data source as follows:
$ slc loopback:datasource
[?] Enter the data-source name: myfile
[?] Select the connector for myfile: other
[?] Enter the connector name without the loopback-connector- prefix: loopback-component-storage
[?] Install storage (Y/n)
Then edit /api/server/datasources.json and manually add the properties of the data source (properties other than "name" and "connector").
For example:
"myfile": { "name": "myfile", "connector": "loopback-component-storage", "provider": "amazon", "key": "your amazon key", "keyId": "your amazon key id" }
Using JavaScript
You can also create a storage component data source programmatically with the loopback.createDataSource() method, putting code in /api/server/server.js. For example, using local file system storage:
/api/server/server.js
var ds = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'filesystem',
  root: path.join(__dirname, 'storage')
});
var container = ds.createModel('container');
Here’s another example, this time for Amazon:
/api/server/server.js
var ds = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'amazon',
  key: 'your amazon key',
  keyId: 'your amazon key id'
});
var container = ds.createModel('container');
app.model(container);
You can also put this code in the /api/server/boot
directory, as an exported function:
module.exports = function(app) {
  // code to set up the data source as shown above
};
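Putting the filesystem example together, a complete boot script could look like this (a sketch; the storage.js file name and the storage directory are illustrative):
/api/server/boot/storage.js
var path = require('path');
var loopback = require('loopback');

module.exports = function(app) {
  // Local filesystem storage rooted at api/server/storage (illustrative path).
  var ds = loopback.createDataSource({
    connector: require('loopback-component-storage'),
    provider: 'filesystem',
    root: path.join(__dirname, '../storage')
  });

  // Create the container model on this data source and expose it on the app.
  var container = ds.createModel('container');
  app.model(container);
};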
Provider credentials
Each cloud storage provider requires different credentials to authenticate. Provide these credentials as properties of the JSON object argument to createDataSource(), in addition to the connector property, as shown in the following table.
Provider | Property | Example |
---|---|---|
Amazon | provider: 'amazon' | { provider: 'amazon', key: '…', keyId: '…' } |
key | Amazon key | |
keyId | Amazon key ID | |
Rackspace | provider: 'rackspace' | { provider: 'rackspace', username: '…', apiKey: '…' } |
username | Your username | |
apiKey | Your API key | |
Azure | provider: 'azure' | { provider: 'azure', storageAccount: '…', storageAccessKey: '…' } |
storageAccount | Name of your storage account | |
storageAccessKey | Access key for storage account | |
OpenStack | provider: 'openstack' | { provider: 'openstack', username: '…', password: '…', authUrl: 'https://your-identity-service' } |
username | Your username | |
password | Your password | |
authUrl | Your identity service | |
Local File System | provider: 'filesystem' | { provider: 'filesystem', root: '/tmp/storage', maxFileSize: "10485760" } |
root | File path to storage root directory. | |
API
Once you have created the container model, it provides both a REST API and a Node API, as described in the following table. For details, see the complete API documentation.
Description | Container Model Method | REST URI |
---|---|---|
List all containers. | getContainers(cb) | GET /api/containers |
Get information about specified container. | getContainer(container, cb) | GET /api/containers/:container |
Create a new container. | createContainer(options, cb) | POST /api/containers |
Delete specified container. | destroyContainer(container, cb) | DELETE /api/containers/:container |
List all files within specified container. | getFiles(container, download, cb) | GET /api/containers/:container/files |
Get information for specified file within specified container. | getFile(container, file, cb) | GET /api/containers/:container/files/:file |
Delete a file within a given container by name. | removeFile(container, file, cb) | DELETE /api/containers/:container/files/:file |
Upload one or more files into the specified container. The request body must use multipart/form-data, the encoding used by HTML file inputs. | upload(req, res, cb) | POST /api/containers/:container/upload |
Download a file within specified container. | download(container, file, res, cb) | GET /api/containers/:container/download/:file |
Get a stream for uploading. | uploadStream(container, file, options, cb) | |
Get a stream for downloading. | downloadStream(container, file, options, cb) | |
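For example, using the Node API methods from the table in a boot script (a sketch; it assumes the storage model is registered as container, as in the default flashboard config, and the container name photos is illustrative):
/api/server/boot/storage-demo.js
module.exports = function(app) {
  var Container = app.models.container;

  // Create a container, then list all containers to confirm it exists.
  Container.createContainer({ name: 'photos' }, function(err, c) {
    if (err) throw err;
    Container.getContainers(function(err, containers) {
      if (err) throw err;
      console.log(containers.map(function(item) { return item.name; }));
    });
  });
};
The equivalent REST calls are POST /api/containers and GET /api/containers.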
Integration with Google Analytics
I integrated Google Analytics into the flashboard admin dashboard, and you can easily set some configuration to make your reports appear on the admin dashboard, as shown below:
http://138.68.46.113/flashboard
1. Log in to your Google Analytics account.
Under Admin, select your account, then select View Settings; under Basic Settings you will see your View ID.
Define GOOGLE_VIEWID in the env file and set it to your View ID (a sketch appears at the end of this section):
/config/dev.env.js
2. Create a service account.
- Open the Service accounts page. If prompted, select a project.
- Click Create service account.
- In the Create service account window, type a name for the service account, and select Furnish a new private key. If you want to grant G Suite domain-wide authority to the service account, also select Enable G Suite Domain-wide Delegation. Then click Create.
Your new public/private key pair is generated and downloaded to your machine; it serves as the only copy of this key. You are responsible for storing it securely.
After downloading your JSON key file, put it in your node_modules folder as a google.json file.
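A sketch of the configuration from step 1 (the View ID is a placeholder):
/config/dev.env.js
module.exports = {
  // ...existing variables...
  GOOGLE_VIEWID: '123456789' // placeholder; use the View ID from your Google Analytics view settings
};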