Search Results

Search found 61580 results on 2464 pages for 'document based database'.


  • MySQLdb rowcount Returns Nothing

    - by Alec K.
    I am trying to log in against my Accounts table using MySQLdb in Python, but it does not work for me. I keep getting the message "Not Logged In". Here is my code: database = MySQLdb.connect("127.0.0.1", "root", "pswd", "Kazzah") cursor = database.cursor() cursor.execute("SELECT * FROM Accounts WHERE Email=%s AND Password=%s", (_Email, _Password)) database.commit() numrows = cursor.rowcount if numrows == 1: msg = "Logged In" else: msg = "Not Logged In" cursor.close() database.close() What am I doing wrong? Thanks.
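    For reference (not a diagnosis of the specific bug), a minimal sketch of one common way to structure this check with MySQLdb is shown below: a SELECT needs no commit(), and fetching a row is a more portable test than relying on rowcount. The credentials mirror the question and the inputs are placeholders.

    import MySQLdb

    # Placeholder inputs; in the real app these come from the login form.
    email, password = "user@example.com", "secret"

    database = MySQLdb.connect(host="127.0.0.1", user="root", passwd="pswd", db="Kazzah")
    cursor = database.cursor()
    try:
        # Parameterized SELECT, as in the question; no commit() is needed for reads.
        cursor.execute(
            "SELECT 1 FROM Accounts WHERE Email = %s AND Password = %s",
            (email, password),
        )
        row = cursor.fetchone()  # None if no matching account exists
        msg = "Logged In" if row else "Not Logged In"
        print(msg)
    finally:
        cursor.close()
        database.close()

    In a real application the password would of course be stored and compared as a hash rather than plain text.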

    Read the article

  • How do I list all table names in SQL Server using T-SQL?

    - by shrimpy
    SELECT name FROM sys.databases -- this can list all database names in the server USE <database> SELECT * FROM INFORMATION_SCHEMA.TABLES -- these two lines can list the tables for one particular database But how can I output the results like below?
    Database   Table
    ---------  -------------
    db1        t1
    db1        t2
    db2        t1
    ...        ...
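    The question asks for pure T-SQL (a cursor over sys.databases, or the undocumented sp_MSforeachdb, is the usual route there), but the same cross-database idea is easy to sketch from a script. Below is a rough Python/pyodbc outline; the driver name and connection string are assumptions, not part of the original question.

    import pyodbc  # assumes an ODBC driver for SQL Server is installed

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    # database_id 1-4 are the system databases (master, tempdb, model, msdb).
    cur.execute("SELECT name FROM sys.databases WHERE database_id > 4")
    for (db_name,) in cur.fetchall():
        cur.execute(
            "SELECT TABLE_NAME FROM [" + db_name + "].INFORMATION_SCHEMA.TABLES "
            "WHERE TABLE_TYPE = 'BASE TABLE'"
        )
        for (table_name,) in cur.fetchall():
            print(db_name, table_name)  # one "Database Table" pair per line

    conn.close()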

    Read the article

  • Installer package for program that uses JDBC to connect to MySQL....

    - by eli1987
    I have an installer wizard thing called 'install creator'. I want to include my MySQL database in the installer, or find another way so that the user, upon installation, can just use my database. Problem is, not everyone has MySQL installed on their computer, and even then, the user doesn't know the name of the database or my password. Somehow the database must be created automatically upon install, and for my purposes, some of the tables created. How can one do this? Thanks
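    The application itself is Java/JDBC, so the snippet below is only the shape of the usual answer, not the asker's code: ship an idempotent schema script and run it on first launch with whatever connection details the installer collects. Sketched here in Python with the mysql-connector client; the database name, table and credentials are all made up.

    import mysql.connector  # assumed client library

    SCHEMA = [
        "CREATE DATABASE IF NOT EXISTS myapp",
        "CREATE TABLE IF NOT EXISTS myapp.accounts ("
        "  id INT AUTO_INCREMENT PRIMARY KEY,"
        "  email VARCHAR(255) NOT NULL)",
    ]

    def bootstrap(host, user, password):
        """Run once at first launch: create the schema if it is not there yet."""
        conn = mysql.connector.connect(host=host, user=user, password=password)
        cur = conn.cursor()
        for statement in SCHEMA:
            cur.execute(statement)
        conn.commit()
        conn.close()

    bootstrap("localhost", "root", "secret")  # placeholder credentials

    In the Java application the equivalent is running the same statements through a JDBC Statement at startup; an embedded database (H2, Derby, SQLite) is another way to avoid requiring a MySQL install on the user's machine at all.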

    Read the article

  • Can I use WCF to replace my current Web Service and Window Service combination?

    - by gun_shy
    I need a little bit of advise regarding the situation I am faced with. The current arrangement I have been tasked with improving just doesn't sit well with me and I feel like there is a better way to do it. The more I read about WCF, the more I get the feeling that it might be what I am looking for. Right now, I have an asp.net client, a .net web service, a windows service, a ms sql database, and a third party application that is used for processing a group of 'project' files into a finalized file. Since the third party application can only handle processing one 'project' at a time, the combination of the web service, window service, and database have been arranged to create a job queue manager for the third party application. The client sends a zip 'project' file containing multiple sub files to the web service. The web service adds a new 'project' line to the database, generating a unique job id. The zip file is expanded to a folder location on the server using the job id as the folder name. The web service then returns the job id to the client. The client will use this id to poll the web service for the status of the job submitted. When the job is complete, the client will request the processed file. The windows service polls the database every x minutes. If a new job exists, the service will pull the oldest job and send it to the third party app for processing. If the processing succeeds, the window service updates the project line in the database, marking the job complete. The window service will continue to process any non complete jobs in the database until there are no more. When it stops finding any jobs, it will sleep x minutes and then poll the database again. I do not like the fact that the window service has to poll the database. If there is only one job submitted, the client will have to wait for the window service to poll and then wait while the 'project' is being processed. It seems like WCF could be used to combine the web and window services using a combination of the InstanceContextMode.Single and ConcurrencyMode.Multiple. So far, I have been unable to find any articles or examples that would point me in the right direction. Can WCF be utilized to accomplish the job queue logic of the current arrangement in a better way? As always, any help is more than appreciated.
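    A singleton WCF service (InstanceContextMode.Single with ConcurrencyMode.Multiple) can own an in-memory queue so that jobs are handed to a worker as they arrive instead of being discovered by polling. Setting the C#/WCF specifics aside, the shape of that queue is sketched below in Python; the processing and status-update calls are stand-ins for the third-party application and the database update.

    import queue
    import threading
    import time

    def process_with_third_party_app(job_id):
        time.sleep(1)  # stand-in for the single-project-at-a-time processing step

    def mark_complete(job_id):
        print("job %s complete" % job_id)  # stand-in for updating the project row

    jobs = queue.Queue()

    def worker():
        # One worker, because the third-party app handles one project at a time.
        while True:
            job_id = jobs.get()  # blocks until work arrives -- no database polling
            process_with_third_party_app(job_id)
            mark_complete(job_id)
            jobs.task_done()

    threading.Thread(target=worker, daemon=True).start()

    def submit(job_id):
        # The "web service" side just enqueues and returns the id for the client.
        jobs.put(job_id)
        return job_id

    submit("42")
    jobs.join()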

    Read the article

  • Drop all foreign keys in a table

    - by trnTash
    I had this script which worked in sql server 2005 -- t-sql scriptlet to drop all constraints on a table DECLARE @database nvarchar(50) DECLARE @table nvarchar(50) set @database = 'dotnetnuke' set @table = 'tabs' DECLARE @sql nvarchar(255) WHILE EXISTS(select * from INFORMATION_SCHEMA.TABLE_CONSTRAINTS where constraint_catalog = @database and table_name = @table) BEGIN select @sql = 'ALTER TABLE ' + @table + ' DROP CONSTRAINT ' + CONSTRAINT_NAME from INFORMATION_SCHEMA.TABLE_CONSTRAINTS where constraint_catalog = @database and table_name = @table exec sp_executesql @sql END It does not work in SQL Server 2008. How can I easily drop all foreign key constraints for a certain table? Does anyone have a better script?
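    One thing worth noting is that the script removes every row in INFORMATION_SCHEMA.TABLE_CONSTRAINTS (primary and unique keys included), not just foreign keys. If the goal is only the foreign keys on one table, sys.foreign_keys is a narrower source. Purely as an illustration rather than a tested replacement, here is the same loop driven from Python/pyodbc; the connection details are placeholders.

    import pyodbc  # assumes an ODBC driver for SQL Server is installed

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "DATABASE=dotnetnuke;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    table = "tabs"  # same table as in the question

    # parent_object_id = FKs declared on this table;
    # use referenced_object_id instead to find FKs in other tables pointing at it.
    cur.execute(
        "SELECT name FROM sys.foreign_keys WHERE parent_object_id = OBJECT_ID(?)",
        table,
    )
    for (fk_name,) in cur.fetchall():
        cur.execute("ALTER TABLE [" + table + "] DROP CONSTRAINT [" + fk_name + "]")

    conn.commit()
    conn.close()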

    Read the article

  • [Android] Putting Serializable Classes into SQL?

    - by CaseyB
    Here's the situation. I have a bunch of objects that implement Serializable that I want to store in a SQL database. I have two questions: (1) Is there a way to serialize the object directly into the database? (2) Is that the best way to do it, or should I either write the object out to a formatted string and put it in the database that way, then parse it back out, or write each member to the database with a field that is unique to each object?
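    On Android the concrete choices are a serialized byte[] stored in a BLOB column versus one column per field. The trade-off is easier to see in a tiny self-contained sketch; Python's pickle and sqlite3 stand in here for Java serialization and the Android SQLite API.

    import pickle
    import sqlite3

    class Thing:
        def __init__(self, name, value):
            self.name = name
            self.value = value

    conn = sqlite3.connect(":memory:")

    # Option 1: serialize the whole object into a single BLOB column.
    conn.execute("CREATE TABLE things (id INTEGER PRIMARY KEY, payload BLOB)")
    conn.execute("INSERT INTO things (payload) VALUES (?)",
                 (pickle.dumps(Thing("answer", 42)),))

    # Reading it back restores the object, but the database cannot query inside it.
    (payload,) = conn.execute("SELECT payload FROM things").fetchone()
    restored = pickle.loads(payload)
    print(restored.name, restored.value)

    # Option 2: one column per field, so the data stays queryable and indexable.
    conn.execute("CREATE TABLE things2 (id INTEGER PRIMARY KEY, name TEXT, value INTEGER)")
    conn.execute("INSERT INTO things2 (name, value) VALUES (?, ?)", ("answer", 42))

    The second form is usually preferred: it can be queried and indexed, and it survives changes to the class's serialization format.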

    Read the article

  • How to Refresh combobox in Visual Basic

    - by phenomenon09
    When a button is clicked, it populates a combo box with all the databases you have created. Another button creates a new database. How do I refresh my combobox to add the newly added database? Here's how I populate my combo box at the start: rs.Open "show databases", conn While Not rs.EOF If rs!Database <> "information_schema" Then Combo1.AddItem rs!Database End If rs.MoveNext Wend cmdOK.Enabled = False cmdCancel.Enabled = False frmLogin.Height = 3300 rs.Close

    Read the article

  • PHP/MySQL Interview - How would you have answered?

    - by martincarlin87
    I was asked this interview question so thought I would post it here to see how other users would answer: Please write some code which connects to a MySQL database (any host/user/pass), retrieves the current date & time from the database, compares it to the current date & time on the local server (i.e. where the application is running), and reports on the difference. The reporting aspect should be a simple HTML page, so that in theory this script can be put on a web server, set to point to a particular database server, and it would tell us whether the two servers’ times are in sync (or close to being in sync). This is what I put: // Connect to database server $dbhost = 'localhost'; $dbuser = 'xxx'; $dbpass = 'xxx'; $dbname = 'xxx'; $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die (mysql_error()); // Select database mysql_select_db($dbname) or die(mysql_error()); // Retrieve the current time from the database server $sql = 'SELECT NOW() AS db_server_time'; // Execute the query $result = mysql_query($sql) or die(mysql_error()); // Since query has now completed, get the time of the web server $php_server_time = date("Y-m-d h:m:s"); // Store query results in an array $row = mysql_fetch_array($result); // Retrieve time result from the array $db_server_time = $row['db_server_time']; echo $db_server_time . '<br />'; echo $php_server_time; if ($php_server_time != $db_server_time) { // Server times are not identical echo '<p>Database server and web server are not in sync!</p>'; // Convert the time stamps into seconds since 01/01/1970 $php_seconds = strtotime($php_server_time); $sql_seconds = strtotime($db_server_time); // Subtract smaller number from biggest number to avoid getting a negative result if ($php_seconds > $sql_seconds) { $time_difference = $php_seconds - $sql_seconds; } else { $time_difference = $sql_seconds - $php_seconds; } // convert the time difference in seconds to a formatted string displaying hours, minutes and seconds $nice_time_difference = gmdate("H:i:s", $time_difference); echo '<p>Time difference between the servers is ' . $nice_time_difference; } else { // Timestamps are exactly the same echo '<p>Database server and web server are in sync with each other!</p>'; } Yes, I know that I have used the deprecated mysql_* functions but that aside, how would you have answered, i.e. what changes would you make and why? Are there any factors I have omitted which I should take into consideration? The interesting thing is that my results always seem to be an exact number of minutes apart when executed on my hosting account: 2012-12-06 11:47:07 2012-12-06 11:12:07
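    For comparison, the same exercise sketched in Python with the mysql-connector client (credentials are placeholders). Comparing the two values as datetime objects instead of re-formatted strings also sidesteps formatting slips; in PHP's date(), for instance, m is the month and i is the minutes, which is the kind of mix-up that can make two clocks look exactly N "minutes" apart, as in the sample output above.

    from datetime import datetime

    import mysql.connector  # assumed client library

    conn = mysql.connector.connect(host="localhost", user="xxx",
                                   password="xxx", database="xxx")
    cur = conn.cursor()

    cur.execute("SELECT NOW()")
    (db_time,) = cur.fetchone()   # arrives as a datetime object
    local_time = datetime.now()   # app-server clock, read right after the query

    drift = abs((local_time - db_time).total_seconds())
    print("DB server time:", db_time)
    print("Local time:    ", local_time)
    print("In sync" if drift < 1 else "Out of sync by about %.0f seconds" % drift)

    cur.close()
    conn.close()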

    Read the article

  • Taking a MySQL backup from my Java application

    - by dhiraj
    I am developing a Java application with MySQL as the database. I have to dump the MySQL database from my application periodically (let's say every day at 10 a.m.) and I have written a batch (.bat) file for dumping the database. The batch file is working fine, but the problem is that it asks for the password each time during its execution. Is there any way to dump the MySQL database without being prompted for a password, and to trigger it from the Java application periodically?
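    A common way to avoid the interactive prompt is to keep the credentials in a MySQL option file and point mysqldump at it, instead of typing the password or passing it on the command line. A rough sketch of that dump step in Python follows; the option-file path, file names and database name are placeholders. In the Java application the equivalent is a ProcessBuilder call with the same arguments, and the schedule can live in the app (a TimerTask, for example) or in cron / Windows Task Scheduler.

    import subprocess
    from datetime import date

    # The option file holds the credentials and should be readable only by the
    # backup user, e.g. /etc/mysql/backup.cnf containing:
    #   [mysqldump]
    #   user=backup
    #   password=secret
    OPTION_FILE = "/etc/mysql/backup.cnf"  # placeholder path

    dump_file = "backup-%s.sql" % date.today()
    with open(dump_file, "w") as out:
        subprocess.run(
            ["mysqldump", "--defaults-extra-file=" + OPTION_FILE, "mydb"],
            stdout=out,
            check=True,
        )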

    Read the article

  • Problem with user logins after db Restore

    - by JJgates
    I have two SQL 2005 instances that reside on different networks. I need to back up a database from instance A and restore it to a database in instance B on a weekly basis so that both databases hold the same data. After the restore, the login SIDs on database B no longer match, and therefore users can't log into database B and connection strings for the web application it supports are broken. Is there a workaround for this? Thanks.
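    This looks like the classic orphaned-users situation: the restored database users keep the SIDs from instance A, which no longer match the logins on instance B. On SQL Server 2005 the usual remedies are sp_change_users_login (or ALTER USER ... WITH LOGIN) run after each restore. The sketch below scripts that idea with Python/pyodbc; the connection details are placeholders and there is no error handling.

    import pyodbc  # assumes an ODBC driver for SQL Server is installed

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=instanceB;"
        "DATABASE=MyRestoredDb;UID=sa;PWD=secret;"
    )
    conn.autocommit = True
    cur = conn.cursor()

    # List database users whose SIDs no longer match a server login.
    cur.execute("EXEC sp_change_users_login 'Report'")
    orphans = [row[0] for row in cur.fetchall()]

    # Re-link each orphaned user to the server login of the same name.
    for user in orphans:
        cur.execute("EXEC sp_change_users_login 'Auto_Fix', ?", user)

    conn.close()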

    Read the article

  • .NET DataSource Clarification

    - by Steven
    I'm fairly new to database programming in .NET. If I want to call several existing queries from the same database for different tasks, should I have one DataSource per database, per database connection, or per query?

    Read the article

  • PHP & MySQL on Mac OS X: Access denied for GUI user

    - by Eirik Lillebo
    Hey! This question was first posted to Stack Overflow, but as it is perhaps just as much a server issue I though it might be just as well to post it here also. I have just installed and configured Apache, MySQL, PHP and phpMyAdmin on my Macbook in order to have a local development environment. But after I moved one of my projects over to the local server I get a weird MySQL error from one of my calls to mysql_query(): Access denied for user '_securityagent'@'localhost' (using password: NO) First of all, the query I'm sending to MySQL is all valid, and I've even testet it through phpMyAdmin with perfect result. Secondly, the error message only happens here while I have at least 4 other mysql connections and queries per page. This call to mysql_query() happens at the end of a really long function that handles data for newly created or modified articles. This basically what it does: Collect all the data from article form (title, content, dates, etc..) Validate collected data Connect to database Dynamically build SQL query based on validated article data Send query to database before closing the connection Pretty basic, I know. I did not recognize the username "_securityagent" so after a quick search I came across this from and article at Apple's Developer Connection talking about some random bug: Mac OS X's security infrastructure gets around this problem by running its GUI code as a special user, "_securityagent". Then I tried put a var_dump() on all variables used in the mysql_connect() call, and every time it returns the correct values (where username is not "_securityagent" of course). Thus I'm wondering if anyone has any idea why 'securityagent' is trying to connect to my database - and how I can keep this error from occurring when I call mysql_query(). Update: Here is the exact code I'm using to connect to the database. But a little explanation must follow: The connection error happens at a call to mysql_query() in function X in class_1 class_1 uses class_2 to connect to database class_2 reads a config file with the database connection variables (host, user, pass, db) class_2 connect to the database through the following function: var $SYSTEM_DB_HOST = ""; function connect_db() { // Reads the config file include('system_config.php'); if (!($SYSTEM_DB_HOST == "")) { mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS); @mysql_select_db($SYSTEM_DB); return true; } else { return false; } }

    Read the article

  • I have to shard a MySQL database. I want to start with 12 shards on 2 machines. What is the best way to do this?

    - by Tim
    All tables are InnoDb. I would rather not use mysqldump, because the shard sizes will be about 200 GB (about 700 million rows), and that will take too long. I was hoping to just stop mysql for an hour, copy the data files to a new machine, and start back up. But you can't do this with InnoDb, as some data is in the shared tablespace. Even if I have the innodb_file_per_table option set. This is not a website, but a custom application, used by tens of thousands right now, so uptime and performance are important. I suppose I could add logic into my server application to allow for gradual rebalancing / moving of a shard. Does anyone have a better idea?

    Read the article

  • MySQL performance over a (local) network much slower than I would expect

    - by user15241
    MySQL queries in my production environment are taking much longer than I would expect them to. The site in question is a fairly large Drupal site, with many modules installed. The webserver (Nginx) and database server (MySQL) are hosted on separate machines, connected by a 100 Mbps LAN connection (hosted by Rackspace). I have the exact same site running on my laptop for development. Obviously, on my laptop, the webserver and database server are on the same box. Here are the results of my database query times: Production: Executed 291 queries in 320.33 milliseconds. (homepage) Executed 517 queries in 999.81 milliseconds. (content page) Development: Executed 316 queries in 46.28 milliseconds. (homepage) Executed 586 queries in 79.09 milliseconds. (content page) As can clearly be seen from these results, the time involved with querying the MySQL database is much shorter on my laptop, where the MySQL server is running on the same machine as the web server. Why is this?! One factor must be the network latency. On average, a round trip from the webserver to the database server takes 0.16ms (shown by ping). That must be added to every single MySQL query. So, taking the content page example above, where there are 517 queries executed, network latency alone will add 82ms to the total query time. However, that doesn't account for the difference I am seeing (79ms on my laptop vs 999ms on the production boxes). What other factors should I be looking at? I had thought about upgrading the NIC to a gigabit connection, but clearly there is something else involved. I have run the MySQL performance tuning script from http://www.day32.com/MySQL/ and it tells me that my database server is configured well (better than my laptop apparently). The only problem reported is "Of 4394 temp tables, 48% were created on disk". This is true in both environments and in the production environment I have even tried increasing max_heap_table_size and tmp_table_size to 1GB, with no change (I think this is because I have some BLOB and TEXT columns).
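    To make the latency arithmetic explicit (all numbers are the ones quoted above; the laptop and production pages do not run exactly the same number of queries, so this is only a rough decomposition):

    # Figures taken from the question.
    round_trip_ms = 0.16

    for page, queries, production_ms, laptop_ms in [
        ("homepage", 291, 320.33, 46.28),
        ("content page", 517, 999.81, 79.09),
    ]:
        latency_ms = queries * round_trip_ms
        unexplained = production_ms - laptop_ms - latency_ms
        print("%s: network latency ~%.0f ms, still unexplained ~%.0f ms"
              % (page, latency_ms, unexplained))

    For the content page that is roughly 83 ms of pure round trips and on the order of 800 ms still unaccounted for, which is why a NIC upgrade alone is unlikely to close the gap.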

    Read the article

  • Java Cloud Service Integration to REST Service

    - by Jani Rautiainen
    Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments customers do not have the option to deploy custom applications developed with JDeveloper to ensure the integrity and supportability of the hosted application service. Instead the custom applications can be deployed to the JCS and integrated to the Fusion Application Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples on how to develop and deploy applications on JCS and how to integrate them with the Fusion Applications instance. In this article a custom application integrating with REST service will be implemented. We will use REST services provided by Taleo as an example; however the same approach will work with any REST service. In this example the data from the REST service is used to populate a dynamic table. Pre-requisites Access to Cloud instance In order to deploy the application access to a JCS instance is needed, a free trial JCS instance can be obtained from Oracle Cloud site. To register you will need a credit card even if the credit card will not be charged. To register simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for example of the registration.Once the request is processed you will be assigned 2 service instances; Java and Database. Applications deployed to the JCS must use Oracle Database Cloud Service as their underlying database. So when JCS instance is created a database instance is associated with it using a JDBC data source.The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud. JDeveloper JDeveloper contains Cloud specific features related to e.g. connection and deployment. To use these features download the JDeveloper from JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link, this version of JDeveloper will have the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring the JDeveloper refer to the installation guideFor details on SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service. Access to a local database The database associated with the JCS instance cannot be connected to with JDBC.  Since creating ADFbc business component requires a JDBC connection we will need access to a local database. 3rd party libraries This example will use some 3rd party libraries for implementing the REST service call and processing the input / output content. Other libraries may also be used, however these are tested to work. Jersey 1.x Jersey library will be used as a client to make the call to the REST service. JCS documentation for supported specifications states: Java API for RESTful Web Services (JAX-RS) 1.1 So Jersey 1.x will be used. Download the single-JAR Jersey bundle; in this example Jersey 1.18 JAR bundle is used. Json-simple Jjson-simple library will be used to process the json objects. Download the  JAR file; in this example json-simple-1.1.1.jar is used. Accessing data in Taleo Before implementing the application it is beneficial to familiarize oneself with the data in Taleo. 
Easiest way to do this is by using a RESTClient on your browser. Once added to the browser you can access the UI: The client can be used to call the REST services to test the URLs and data before adding them into the application. First derive the base URL for the service this can be done with: Method: GET URL: https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/<company name> The response will contain the base URL to be used for the service calls for the company. Next obtain authentication token with: Method: POST URL: https://ch.tbe.taleo.net/CH07/ats/api/v1/login?orgCode=<company>&userName=<user name>&password=<password> The response includes an authentication token that can be used for few hours to authenticate with the service: {   "response": {     "authToken": "webapi26419680747505890557"   },   "status": {     "detail": {},     "success": true   } } To authenticate the service calls navigate to "Headers -> Custom Header": And add a new request header with: Name: Cookie Value: authToken=webapi26419680747505890557 Once authentication token is defined the tool can be used to invoke REST services; for example: Method: GET URL: https://ch.tbe.taleo.net/CH07/ats/api/v1/object/candidate/search.xml?status=16 This data will be used on the application to be created. For details on the Taleo REST services refer to the Taleo Business Edition REST API Guide. Create Application First Fusion Web Application is created and configured. Start JDeveloper and click "New Application": Application Name: JcsRestDemo Application Package Prefix: oracle.apps.jcs.test Application Template: Fusion Web Application (ADF) Configure Local Cloud Connection Follow the steps documented in the "Java Cloud Service ADF Web Application" article to configure a local database connection needed to create the ADFbc objects. Configure Libraries Add the 3rd party libraries into the class path. Create the following directory and copy the jar files into it: <JDEV_USER_HOME>/JcsRestDemo/lib  Select the "Model" project, navigate "Application -> Project Properties -> Libraries and Classpath -> Add JAR / Directory" and add the 2 3rd party libraries: Accessing Data from Taleo To access data from Taleo using the REST service the 3rd party libraries will be used. 2 Java classes are implemented, one representing the Candidate object and another for accessing the Taleo repository Candidate Candidate object is a POJO object used to represent the candidate data obtained from the Taleo repository. The data obtained will be used to populate the ADFbc object used to display the data on the UI. The candidate object contains simply the variables we obtain using the REST services and the getters / setters for them: Navigate "New -> General -> Java -> Java Class", enter "Candidate" as the name and create it in the package "oracle.apps.jcs.test.model".  
Copy / paste the following as the content: import oracle.jbo.domain.Number; public class Candidate { private Number candId; private String firstName; private String lastName; public Candidate() { super(); } public Candidate(Number candId, String firstName, String lastName) { super(); this.candId = candId; this.firstName = firstName; this.lastName = lastName; } public void setCandId(Number candId) { this.candId = candId; } public Number getCandId() { return candId; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getFirstName() { return firstName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getLastName() { return lastName; } } Taleo Repository Taleo repository class will interact with the Taleo REST services. The logic will query data from Taleo and populate Candidate objects with the data. The Candidate object will then be used to populate the ADFbc object used to display data on the UI. Navigate "New -> General -> Java -> Java Class", enter "TaleoRepository" as the name and create it in the package "oracle.apps.jcs.test.model".  Copy / paste the following as the content (for details of the implementation refer to the documentation in the code): import com.sun.jersey.api.client.Client; import com.sun.jersey.api.client.ClientResponse; import com.sun.jersey.api.client.WebResource; import com.sun.jersey.core.util.MultivaluedMapImpl; import java.io.StringReader; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.Map; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import oracle.jbo.domain.Number; import org.json.simple.JSONArray; import org.json.simple.JSONObject; import org.json.simple.parser.JSONParser; /** * This class interacts with the Taleo REST services */ public class TaleoRepository { /** * Connection information needed to access the Taleo services */ String _company = null; String _userName = null; String _password = null; /** * Jersey client used to access the REST services */ Client _client = null; /** * Parser for processing the JSON objects used as * input / output for the services */ JSONParser _parser = null; /** * The base url for constructing the REST URLs. This is obtained * from Taleo with a service call */ String _baseUrl = null; /** * Authentication token obtained from Taleo using a service call. * The token can be used to authenticate on subsequent * service calls. The token will expire in 4 hours */ String _authToken = null; /** * Static url that can be used to obtain the url used to construct * service calls for a given company */ private static String _taleoUrl = "https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/"; /** * Default constructor for the repository * Authentication details are passed as parameters and used to generate * authentication token. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * * @param company the company for which the service calls are made * @param userName the user name to authenticate with * @param password the password to authenticate with. 
*/ public TaleoRepository(String company, String userName, String password) { super(); _company = company; _userName = userName; _password = password; _client = Client.create(); _parser = new JSONParser(); _baseUrl = getBaseUrl(); } /** * This obtains the base url for a company to be used * to construct the urls for service calls * @return base url for the service calls */ private String getBaseUrl() { String result = null; if (null != _baseUrl) { result = _baseUrl; } else { try { String company = _company; WebResource resource = _client.resource(_taleoUrl + company); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("URL"); } catch (Exception ex) { ex.printStackTrace(); } } return result; } /** * Generates authentication token, that can be used to authenticate on * subsequent service calls. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * @return authentication token that can be used to authenticate on * subsequent service calls */ private String login() { String result = null; try { MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("orgCode", _company); formData.add("userName", _userName); formData.add("password", _password); WebResource resource = _client.resource(_baseUrl + "login"); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).post(ClientResponse.class, formData); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("authToken"); } catch (Exception ex) { throw new RuntimeException("Unable to login ", ex); } if (null == result) throw new RuntimeException("Unable to login "); return result; } /** * Releases a authentication token. Each call to login must be followed * by call to logout after the processing is done. This is required as * the tokens are limited to 20 per user and if not released the tokens * will only expire after 4 hours. * @param authToken */ private void logout(String authToken) { WebResource resource = _client.resource(_baseUrl + "logout"); resource.header("cookie", "authToken=" + authToken).post(ClientResponse.class); } /** * This method is used to obtain a list of candidates using a REST * service call. At this example the query is hard coded to query * based on status. 
The url constructed to access the service is: * <_baseUrl>/object/candidate/search.xml?status=16 * @return List of candidates obtained with the service call */ public List<Candidate> getCandidates() { List<Candidate> result = new ArrayList<Candidate>(); try { // First login, note that in finally block we must have logout _authToken = "authToken=" + login(); /** * Construct the URL, the resulting url will be: * <_baseUrl>/object/candidate/search.xml?status=16 */ MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("status", "16"); JSONArray searchResults = (JSONArray)getTaleoResource("object/candidate/search", "searchResults", formData); /** * Process the results, the resulting JSON object is something like * this (simplified for readability): * * { * "response": * { * "searchResults": * [ * { * "candidate": * { * "candId": 211, * "firstName": "Mary", * "lastName": "Stochi", * logic here will find the candidate object(s), obtain the desired * data from them, construct a Candidate object based on the data * and add it to the results. */ for (Object object : searchResults) { JSONObject temp = (JSONObject)object; JSONObject candidate = (JSONObject)findObject(temp, "candidate"); Long candIdTemp = (Long)candidate.get("candId"); Number candId = (null == candIdTemp ? null : new Number(candIdTemp)); String firstName = (String)candidate.get("firstName"); String lastName = (String)candidate.get("lastName"); result.add(new Candidate(candId, firstName, lastName)); } } catch (Exception ex) { ex.printStackTrace(); } finally { if (null != _authToken) logout(_authToken); } return result; } /** * Convenience method to construct url for the service call, invoke the * service and obtain a resource from the response * @param path the path for the service to be invoked. This is combined * with the base url to construct a url for the service * @param resource the key for the object in the response that will be * obtained * @param parameters any parameters used for the service call. The call * is slightly different depending whether parameters exist or not. * @return the resource from the response for the service call */ private Object getTaleoResource(String path, String resource, MultivaluedMap<String, String> parameters) { Object result = null; try { WebResource webResource = _client.resource(_baseUrl + path); ClientResponse response = null; if (null == parameters) response = webResource.header("cookie", _authToken).get(ClientResponse.class); else response = webResource.queryParams(parameters).header("cookie", _authToken).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); result = findObject(jsonObject, resource); } catch (Exception ex) { ex.printStackTrace(); } return result; } /** * Convenience method to recursively find a object with an key * traversing down from a given root object. This will traverse a * JSONObject / JSONArray recursively to find a matching key, if found * the object with the key is returned. 
* @param root root object which contains the key searched for * @param key the key for the object to search for * @return the object matching the key */ private Object findObject(Object root, String key) { Object result = null; if (root instanceof JSONObject) { JSONObject rootJSON = (JSONObject)root; if (rootJSON.containsKey(key)) { result = rootJSON.get(key); } else { Iterator children = rootJSON.entrySet().iterator(); while (children.hasNext()) { Map.Entry entry = (Map.Entry)children.next(); Object child = entry.getValue(); if (child instanceof JSONObject || child instanceof JSONArray) { result = findObject(child, key); if (null != result) break; } } } } else if (root instanceof JSONArray) { JSONArray rootJSON = (JSONArray)root; for (Object child : rootJSON) { if (child instanceof JSONObject || child instanceof JSONArray) { result = findObject(child, key); if (null != result) break; } } } return result; } }   Creating Business Objects While JCS application can be created without a local database, the local database is required when using ADFbc objects even if database objects are not referred. For this example we will create a "Transient" view object that will be programmatically populated based the data obtained from Taleo REST services. Creating ADFbc objects Choose the "Model" project and navigate "New -> Business Tier : ADF Business Components : View Object". On the "Initialize Business Components Project" choose the local database connection created in previous step. On Step 1 enter "JcsRestDemoVO" on the "Name" and choose "Rows populated programmatically, not based on query": On step 2 create the following attributes: CandId Type: Number Updatable: Always Key Attribute: checked Name Type: String Updatable: Always On steps 3 and 4 accept defaults and click "Next".  On step 5 check the "Application Module" checkbox and enter "JcsRestDemoAM" as the name: Click "Finish" to generate the objects. Populating the VO To display the data on the UI the "transient VO" is populated programmatically based on the data obtained from the Taleo REST services. Open the "JcsRestDemoVOImpl.java". Copy / paste the following as the content (for details of the implementation refer to the documentation in the code): import java.sql.ResultSet; import java.util.List; import java.util.ListIterator; import oracle.jbo.server.ViewObjectImpl; import oracle.jbo.server.ViewRowImpl; import oracle.jbo.server.ViewRowSetImpl; // --------------------------------------------------------------------- // --- File generated by Oracle ADF Business Components Design Time. // --- Tue Feb 18 09:40:25 PST 2014 // --- Custom code may be added to this class. // --- Warning: Do not modify method signatures of generated methods. // --------------------------------------------------------------------- public class JcsRestDemoVOImpl extends ViewObjectImpl { /** * This is the default constructor (do not remove). */ public JcsRestDemoVOImpl() { } @Override public void executeQuery() { /** * For some reason we need to reset everything, otherwise * 2nd entry to the UI screen may fail with * "java.util.NoSuchElementException" in createRowFromResultSet * call to "candidates.next()". I am not sure why this is happening * as the Iterator is new and "hasNext" is true at the point * of the execution. My theory is that since the iterator object is * exactly the same the VO cache somehow reuses the iterator including * the pointer that has already exhausted the iterable elements on the * previous run. 
Working around the issue * here by cleaning out everything on the VO every time before query * is executed on the VO. */ getViewDef().setQuery(null); getViewDef().setSelectClause(null); setQuery(null); this.reset(); this.clearCache(); super.executeQuery(); } /** * executeQueryForCollection - overridden for custom java data source support. */ protected void executeQueryForCollection(Object qc, Object[] params, int noUserParams) { /** * Integrate with the Taleo REST services using TaleoRepository class. * A list of candidates matching a hard coded query is obtained. */ TaleoRepository repository = new TaleoRepository(<company>, <username>, <password>); List<Candidate> candidates = repository.getCandidates(); /** * Store iterator for the candidates as user data on the collection. * This will be used in createRowFromResultSet to create rows based on * the custom iterator. */ ListIterator<Candidate> candidatescIterator = candidates.listIterator(); setUserDataForCollection(qc, candidatescIterator); super.executeQueryForCollection(qc, params, noUserParams); } /** * hasNextForCollection - overridden for custom java data source support. */ protected boolean hasNextForCollection(Object qc) { boolean result = false; /** * Determines whether there are candidates for which to create a row */ ListIterator<Candidate> candidates = (ListIterator<Candidate>)getUserDataForCollection(qc); result = candidates.hasNext(); /** * If all candidates to be created indicate that processing is done */ if (!result) { setFetchCompleteForCollection(qc, true); } return result; } /** * createRowFromResultSet - overridden for custom java data source support. */ protected ViewRowImpl createRowFromResultSet(Object qc, ResultSet resultSet) { /** * Obtain the next candidate from the collection and create a row * for it. */ ListIterator<Candidate> candidates = (ListIterator<Candidate>)getUserDataForCollection(qc); ViewRowImpl row = createNewRowForCollection(qc); try { Candidate candidate = candidates.next(); row.setAttribute("CandId", candidate.getCandId()); row.setAttribute("Name", candidate.getFirstName() + " " + candidate.getLastName()); } catch (Exception e) { e.printStackTrace(); } return row; } /** * getQueryHitCount - overridden for custom java data source support. */ public long getQueryHitCount(ViewRowSetImpl viewRowSet) { /** * For this example this is not implemented rather we always return 0. */ return 0; } } Creating UI Choose the "ViewController" project and navigate "New -> Web Tier : JSF : JSF Page". On the "Create JSF Page" enter "JcsRestDemo" as name and ensure that the "Create as XML document (*.jspx)" is checked.  Open "JcsRestDemo.jspx" and navigate to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1" and drag & drop the VO to the "<af:form> " as a "ADF Read-only Table": Accept the defaults in "Edit Table Columns". To execute the query navigate to to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1 -> Operations -> Execute" and drag & drop the operation to the "<af:form> " as a "Button": Deploying to JCS Follow the same steps as documented in previous article"Java Cloud Service ADF Web Application". Once deployed the application can be accessed with URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsRestDemo-ViewController-context-root/faces/JcsRestDemo.jspx The UI displays a list of candidates obtained from the Taleo REST Services: Summary In this article we learned how to integrate with REST services using Jersey library in JCS. 
In future articles various other integration techniques will be covered.
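    The Taleo handshake the article walks through (service-URL lookup, login for an authToken, a Cookie header on every call, then logout) is compact enough to outline outside Java as well. The sketch below uses Python with the requests library purely as pseudocode for that flow; the endpoints and JSON keys follow the article, the credentials are placeholders, and it is not part of the JCS sample itself.

    import requests  # assumed HTTP client

    COMPANY, USER, PASSWORD = "mycompany", "user", "secret"  # placeholders

    # 1. Resolve the per-company base URL.
    r = requests.get(
        "https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/" + COMPANY)
    base_url = r.json()["response"]["URL"]

    # 2. Log in to obtain a short-lived authToken.
    r = requests.post(base_url + "login",
                      params={"orgCode": COMPANY, "userName": USER,
                              "password": PASSWORD})
    auth_token = r.json()["response"]["authToken"]
    headers = {"Cookie": "authToken=" + auth_token}

    # 3. Call a service, e.g. the candidate search used in the article.
    r = requests.get(base_url + "object/candidate/search",
                     params={"status": "16"}, headers=headers)
    for item in r.json()["response"]["searchResults"]:
        cand = item["candidate"]
        print(cand["candId"], cand["firstName"], cand["lastName"])

    # 4. Release the token (the article notes only 20 are allowed per user).
    requests.post(base_url + "logout", headers=headers)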

    Read the article

  • How to let users post links/images to Facebook, Twitter, Buzz etc... from a Rails based website?

    - by wgpubs
    I'd like to offer users the ability to post images / links to articles from my web application to Facebook, Twitter, Buzz and any other social network. A perfect example of the functionality I'm trying to replicate is mashable.com ... where each social network is represented by an icon that a) shows the number of shares AND b) allows users to click on it to post to that specific network. Don't know if it matters ... but the site is built using RoR. Thanks

    Read the article

  • postfix with mailman

    - by Thufir
    What should happen is that [email protected] should be delivered to that users inbox on localhost, user@localhost. Thunderbird works fine at reading user@localhost. I'm just using a small portion of postfix-dovecot with Ubuntu mailman. How can I get postfix to recognize the FQDN and deliver them to a localhost inbox? root@dur:~# root@dur:~# tail /var/log/mail.err;tail /var/log/mailman/subscribe;postconf -n Aug 27 18:59:16 dur dovecot: lda(root): Error: chdir(/root) failed: Permission denied Aug 27 18:59:16 dur dovecot: lda(root): Error: user root: Initialization failed: Initializing mail storage from mail_location setting failed: stat(/root/Maildir) failed: Permission denied (euid=65534(nobody) egid=65534(nogroup) missing +x perm: /root, dir owned by 0:0 mode=0700) Aug 27 18:59:16 dur dovecot: lda(root): Fatal: Invalid user settings. Refer to server log for more information. Aug 27 20:09:16 dur postfix/trivial-rewrite[15896]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 21:19:17 dur postfix/trivial-rewrite[16569]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 22:27:00 dur postfix[17042]: fatal: usage: postfix [-c config_dir] [-Dv] command Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 22:59:07 dur postfix/postfix-script[17459]: error: unknown command: 'restart' Aug 27 22:59:07 dur postfix/postfix-script[17460]: fatal: usage: postfix start (or stop, reload, abort, flush, check, status, set-permissions, upgrade-configuration) Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 21:39:03 2012 (16734) cola: pending "[email protected]" <[email protected]> 127.0.0.1 Aug 27 21:40:37 2012 (16749) cola: pending "[email protected]" <[email protected]> 127.0.0.1 Aug 27 22:45:31 2012 (17288) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1 Aug 27 22:45:46 2012 (17293) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1 Aug 27 23:02:01 2012 (17588) test3: pending [email protected] 127.0.0.1 Aug 27 23:05:41 2012 (17652) test4: pending [email protected] 127.0.0.1 Aug 27 23:56:20 2012 (17985) test5: pending [email protected] 127.0.0.1 alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases append_dot_mydomain = no biff = no broken_sasl_auth_clients = yes config_directory = /etc/postfix default_transport = smtp home_mailbox = Maildir/ inet_interfaces = loopback-only mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}" mailbox_size_limit = 0 mailman_destination_recipient_limit = 1 mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost myhostname = dur.bounceme.net mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 readme_directory = no recipient_delimiter = + relay_domains = lists.dur.bounceme.net relay_transport = relay relayhost = smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtp_use_tls = yes smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_authenticated_header = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/dovecot-auth 
smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_tls_auth_only = yes smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key smtpd_tls_mandatory_ciphers = medium smtpd_tls_mandatory_protocols = SSLv3, TLSv1 smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes tls_random_source = dev:/dev/urandom transport_maps = hash:/etc/postfix/transport root@dur:~# there's definitely a transport problem: root@dur:~# root@dur:~# root@dur:~# grep transport /var/log/mail.log | tail Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: transport_maps lookup failure Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*" Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*" Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "[email protected]" Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: transport_maps lookup failure root@dur:~# trying to add the transport file: EDIT root@dur:~# root@dur:~# touch /etc/postfix/transport root@dur:~# ll /etc/postfix/transport -rw-r--r-- 1 root root 0 Aug 28 00:16 /etc/postfix/transport root@dur:~# root@dur:~# cd /etc/postfix/ root@dur:/etc/postfix# root@dur:/etc/postfix# postmap transport root@dur:/etc/postfix# root@dur:/etc/postfix# cat transport

    Read the article

  • SQL Server Issue: Could not allocate space for object ... primary filegroup is full

    - by Luke
    Trying to figure out a problem at an office that has SQL Server 2005 installed on Windows SBS Server 2008. Here's the setup: It's an office, and the person who set this all up is nowhere to be found. I'm the best hope they have... One of the programs they use on a workstation gives them an error of "Could not allocate space for object 'Billing' in database "MyDatabase" because primary filegroup is full" when trying to save an entry in their software. I searched around for hours, looking for possible solutions. One was to check for available disk space, and another was to defrag. I checked the hard drives on the server, and there is plenty of space free. I also defragged, which may have helped the problem somewhat. It's hard to say, because it seems like with the nature of the error, if you try over and over you might get it to actually save. My next step was to try to see if autogrowth was enabled on the database. This would seem to be a likely / possible solution, but I can't access the database! If I run the SQL Management Studio, I can log in as my Windows user and view the list of databases. However, if I try to do anything (actually view the database, view the properties, add or edit users), I get errors that I don't have permission. For what it's worth, I also tried runing Management Studio as Administrator, in case that would help. No difference, though. Now, what I'm guessing is going on -- from my limited knowledge of SQL and from reading online -- is that though I'm logged in as a Windows administrator, that account does NOT have SQL access. I do see a list of SQL users, including SA, but I again don't have permission to add one or to change the password on an existing one. And nobody at the office has any idea what the SQL passwords could be. So... here's my thinking thus far: 1 - The "Could not allocate" error likely points to a database that needs to be allowed to autogrow. Especially since I verified there is plenty of free space and the HD has been defragmented. 2 - Enabling autogrow would be very easy to do if I had the proper access within SQL Management Stuido. That leads me to this link: http://blogs.technet.com/b/sqlman/archive/2011/06/14/tips-amp-tricks-you-have-lost-access-to-sql-server-now-what.aspx It sounds like it's a step-by-step guide for giving me the access I need to SQL. I'm guessing that if I followed this guide, I would be able to then log in to the SQL server via Management Studio with the proper permissions, and would be able to enable autogrow (or simply view the status of the existing database), and hopefully solve the "Could not allocate space" problem! So I guess I have a few questions: 1 - Would you guys agree with my "diagnosis"? Think I'm barking up the right tree? 2 - Is there any risk at all in hurting / disabling / wrecking the current SQL database or setup with me going through the guide to regain SQL access? I understand that per the guide, I would have to temporarily shut down SQL, so obviously it wouldn't be accessible during that time. But it wouldn't be worth the risk if there's a chance I could mess anything up... Like I said, the workstations ARE currently accessing the database somehow, but nobody knows with what login info or anything. Basically, it's set up, it works (usually), but if they had to reload the software, nobody would know how. Any feedback would be appreciated!! The problem is such that it's not an emergency for them, but an annoyance. If I could fix it, it would be wonderful. 
But if not, I think they'll manage, especially as they are going to eventually stop using this software. Thank you so much for your time! Luke
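    Once access to the instance is regained, the autogrow suspicion is easy to confirm: sys.database_files shows the size, growth increment and maximum size for each file of the database. A rough sketch of that check follows (connection string and file name are placeholders; the commented ALTER DATABASE line is the kind of statement that would raise the cap, to be adapted to the actual file names).

    import pyodbc  # assumes an ODBC driver for SQL Server is installed

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=SBSSERVER;"
        "DATABASE=MyDatabase;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    # growth = 0 means autogrow is off; is_percent_growth says how growth is expressed;
    # max_size = -1 means the file may grow until the disk is full.
    cur.execute("""
        SELECT name, size * 8 / 1024 AS size_mb, growth, is_percent_growth, max_size
        FROM sys.database_files
    """)
    for name, size_mb, growth, is_percent, max_size in cur.fetchall():
        print(name, size_mb, growth, is_percent, max_size)

    # Example of raising the limit once the culprit file is known:
    # cur.execute("ALTER DATABASE MyDatabase MODIFY FILE "
    #             "(NAME = MyDatabase_Data, FILEGROWTH = 256MB, MAXSIZE = UNLIMITED)")

    conn.close()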

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handles line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved. Then the client sprung another note: Oh by the way we have a bunch of customers (real estate agents) who need to post complete HTML documents. Oh uh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There has been lots of discussions on this subject on StackOverFlow (and here and here) but to after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and Images need to be loaded, links need to work and so. While the 'legit' HTML posted by these agents is basic in nature it does span most of the full gamut of HTML (4). Most of the solutions XSS prevention/sanitizer solutions I found were way to aggressive and rendered the posted output unusable mostly because they tend to strip any externally loaded content. In short I needed a custom solution. I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input, but when you need to allow a larger array of HTML functionality a blacklist is probably easier to manage as the vast majority of elements and attributes could be allowed. Also white listing gets a bit more complex with HTML5 and the new proliferation of new HTML tags and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below) so even with a white list, custom logic is still required to handle many of those edge cases. The Microsoft Web Protection Library (AntiXSS) My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML Encoding and Sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me,but I found that this library is way too restrictive for my needs. Specifically the Sanitation class strips out images and links which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone if feeling this library is not really useful without some way to configure operation. 
To give you an example of what didn't work for me with the library here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " + "<a href='http://west-wind.com'>West Wind</a> site. " + "<img src='http://west-wind.com/images/new.gif' /> " ; and the code to sanitize it with the AntiXSS Sanitize class:@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value)) This produced a not so useful sanitized string: Here we go. Visit the <a>West Wind</a> site. While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex. Using Html Agility Pack for HTML Parsing The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the HTML Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model to full HTML documents or HTML fragments that let you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack. Blacklist based HTML Parsing to strip XSS Code For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument of what's better: A whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form>  is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to. 
Here's my HtmlSanitizer implementation:using System.Collections.Generic; using System.IO; using System.Xml; using HtmlAgilityPack; namespace Westwind.Web.Utilities { public class HtmlSanitizer { public HashSet<string> BlackList = new HashSet<string>() { { "script" }, { "iframe" }, { "form" }, { "object" }, { "embed" }, { "link" }, { "head" }, { "meta" } }; /// <summary> /// Cleans up an HTML string and removes HTML tags in blacklist /// </summary> /// <param name="html"></param> /// <returns></returns> public static string SanitizeHtml(string html, params string[] blackList) { var sanitizer = new HtmlSanitizer(); if (blackList != null && blackList.Length > 0) { sanitizer.BlackList.Clear(); foreach (string item in blackList) sanitizer.BlackList.Add(item); } return sanitizer.Sanitize(html); } /// <summary> /// Cleans up an HTML string by removing elements /// on the blacklist and all elements that start /// with onXXX . /// </summary> /// <param name="html"></param> /// <returns></returns> public string Sanitize(string html) { var doc = new HtmlDocument(); doc.LoadHtml(html); SanitizeHtmlNode(doc.DocumentNode); //return doc.DocumentNode.WriteTo(); string output = null; // Use an XmlTextWriter to create self-closing tags using (StringWriter sw = new StringWriter()) { XmlWriter writer = new XmlTextWriter(sw); doc.DocumentNode.WriteTo(writer); output = sw.ToString(); // strip off XML doc header if (!string.IsNullOrEmpty(output)) { int at = output.IndexOf("?>"); output = output.Substring(at + 2); } writer.Close(); } doc = null; return output; } private void SanitizeHtmlNode(HtmlNode node) { if (node.NodeType == HtmlNodeType.Element) { // check for blacklist items and remove if (BlackList.Contains(node.Name)) { node.Remove(); return; } // remove CSS Expressions and embedded script links if (node.Name == "style") { if (string.IsNullOrEmpty(node.InnerText)) { if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:")) node.ParentNode.RemoveChild(node); } } // remove script attributes if (node.HasAttributes) { for (int i = node.Attributes.Count - 1; i >= 0; i--) { HtmlAttribute currentAttribute = node.Attributes[i]; var attr = currentAttribute.Name.ToLower(); var val = currentAttribute.Value.ToLower(); span style="background: white; color: green">// remove event handlers if (attr.StartsWith("on")) node.Attributes.Remove(currentAttribute); // remove script links else if ( //(attr == "href" || attr== "src" || attr == "dynsrc" || attr == "lowsrc") && val != null && val.Contains("javascript:")) node.Attributes.Remove(currentAttribute); // Remove CSS Expressions else if (attr == "style" && val != null && val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")) node.Attributes.Remove(currentAttribute); } } } // Look through child nodes recursively if (node.HasChildNodes) { for (int i = node.ChildNodes.Count - 1; i >= 0; i--) { SanitizeHtmlNode(node.ChildNodes[i]); } } } } } Please note: Use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were you can you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options.  The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route. 
The HtmlSanitizer code is semi-generic: it allows full featured HTML fragments and only disallows script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML style self-closing tags which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers.

What a Pain

To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that came even close to working for me, or even provided a working base. For the project I was working on I had no choice and I'm sharing the code here merely as a base line to start with and potentially expand on for specific needs. It's sad that the Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor, or possibly from a third party that provides security tools. Luckily for my application we are dealing with authenticated and validated users, so the user base is fairly well known and relatively small - this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like MarkDown or even a good HTML Edit control that can provide some limits on what types of input are allowed. Alas in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs.
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-)

Resources

Code on GitHub
Html Agility Pack
XSS Cheat Sheet
XSS Prevention Cheat Sheet
Microsoft Web Protection Library (AntiXss)

StackOverflow Links:
http://stackoverflow.com/questions/341872/html-sanitizer-for-net
http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Security  HTML  ASP.NET  JavaScript

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included: Web Sites: General Availability Release of Windows Azure Web Sites with SLA Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines) MSDN: No more credit card requirement for sign-up All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Web Sites: General Availability Release of Windows Azure Web Sites I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing: A 99.9% monthly SLA (Service Level Agreement) for the Standard tier Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support) The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM: Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of a SSL cert – but the fee is per-cert and not per site which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features: Auto-Scale support Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details. 64-bit and 32-bit mode support You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  
This enables you to address even more memory within individual web applications. Memory dumps Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools. Scaling Sites Independently Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here.  Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).  Mobile Services: General Availability Release of Windows Azure Mobile Services I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.  Customers We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be. Partners When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps: New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services. SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox. Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps. Xamarin provides a Mobile Services add on to make it easy building cross-platform connected mobile aps. Pusher allows quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps. Visual Studio 2013 and Windows 8.1 This week during //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever. 
Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right clicking then choosing Add Connected Service. You can either create a new Mobile Service or choose existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically. Add Push Notifications Push Notifications and Live Tiles are a key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking Add a Push Notification item: The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app. Server Explorer Integration In Visual Studio 2013 you can also now view your Mobile Services in the the Server Explorer. You can add tables, edit, and save server side scripts without ever leaving Visual Studio, as shown on the image below: Pricing With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. The following table summarizes the new pricing model (full pricing details here):   You can find the full details of the new pricing model here. Build Conference Talks The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks: Mobile Services – Soup to Nuts — Josh Twist Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris Who’s that user? Identity in Mobile Apps — Dinesh Kulkarni Building REST Services with JavaScript — Nathan Totten Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk , Paul Batum Protips for Windows Azure Mobile Services — Chris Risner AutoScale: Dynamically scale up/down your app based on real-world usage One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we’re announcing that AutoScale will be built-into Windows Azure directly.  With today’s release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon). 
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics: CPU percentage Storage queue depth (Cloud Services and Virtual Machines only) We’ll enable automatic scaling on even more scale metrics in future updates. When to use Auto-Scale The following are good criteria for services/apps that will benefit from the use of auto-scale: The service/app can scale horizontally (e.g. it can be duplicated to multiple instances) The service/app load changes over time If your app meets these criteria, then you should look to leverage auto-scale. How to Enable Auto-Scale To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the scale tab turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale.  Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. The image below demonstrates how to enable Auto-Scale on a Windows Azure Web-Site.  I’ve configured the web-site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40% then Windows Azure will automatically start shutting down VMs to save me money: Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances. Using the Auto-Scale Preview With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running  in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web sites, Cloud services or Virtual Machines). If you hit the 10 limit, you can disable auto-scale for any resource to enable it for another. Alerts and Notifications Starting today we are now providing the ability to configure threshold based alerts on monitoring metrics. This feature is available for compute services (cloud services, VM, websites and mobiles services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for: Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. 
For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured. Creating Alert Rules You can add an alert rule for a monitoring metric by navigating to the Setting -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on: The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected:   Once created the rule will show up in your alerts list within the settings tab: The rule above is defined as “not activated” since it hasn’t tripped over the CPU threshold we set.  If the CPU on the above machine goes over the limit, though, I’ll get an email notifying me from an Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I’ll see it highlighted in red.  Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past. Alert Notifications With today’s initial preview you can now easily create alerting rules based on monitoring metrics and get notified on active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules. No More Credit Card Requirement for MSDN Subscribers Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and remove the spending limit. This enables a super easy, one page sign-up experience for MSDN users.  Simply sign-up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test:   This makes it trivially easy for every MDSN customer to start using Windows Azure today.  If you haven’t signed up yet, I definitely recommend checking it out. Summary Today’s release includes a ton of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
Hi clerks,

Offloading the backup operation to another host using disk cloning could really improve the performance on highly busy databases (24x7, zero downtime and all this stuff ...). There are well known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies:

Assumptions: *********** ASM diskgroups name:   +data_${db_name} : asm data diskgroup +fra_${db_name} :  asm fra  diskgroup   EMC Time Finder sync groups name:   rac_${DB_NAME}_data_tf : data group rac_${DB_NAME}_fra_tf:   fra group

There are two scripts, one located on the production box ( bck_database.sh ) and the other one on the backup server node ( bck_database_mirror.sh ). The second one is remotely executed from the production host. There are a bunch of variables along the code with self-explanatory names I guess, anyway let me know if you want some help.

#!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    NAME ###     bck_database.sh ### ###    DESCRIPTION ###     Database backup on third mirror ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###    Oracle            28/01/10             - Creacion ###   V_DATE=`/bin/date +%Y%m%d_%H%M%S` V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1     ADMIN_DIR=`dirname $0` . ${ADMIN_DIR}/setenv_instance.sh -- This script should set the instance vars like Oracle Home, Sid, db_name ... if [ $? -ne 0 ] then   echo "Error when setting the environment."   exit 1 fi   echo "${V_DATE} ####################################################" echo "Executing database backup: ${DB_NAME}" echo "####################################################################"   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm data diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt if [ $? -ne 0 ] then   echo "Error when sync asm data diskgroups"   exit 2 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm data disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm data diskgroups"   exit 3 fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm fra diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt if [ $?
-ne 0 ] then   echo "Error when sync asm fra diskgroups"   exit 4 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm fra disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm fra diskgroups"   exit 5 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "ASM sync sucessfully completed!" echo "####################################################################"     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to BEGIN BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter system archive log current;   alter database ${DB_NAME} begin backup; ! if [ $? -ne 0 ] then   echo "Error when updating database status to BEGIN backup"   exit 6 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm data disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm data disks"   exit 7 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to END BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter database ${DB_NAME} end backup;   alter system archive log current; ! if [ $? -ne 0 ] then   echo "Error when updating database status to END backup"   exit 8 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Generating controlfile copies...." echo "####################################################################" rman<<-! connect target / run { allocate channel ch1 type DISK; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl'; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl'; } ! if [ $? -ne 0 ] then   echo "Error generating controlfile copies"   exit 9 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Resync RMAN catalog ....." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} resync catalog; ! if [ $? -ne 0 ] then   echo "Error when resyncing RMAN catalog"   exit 10 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm fra disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm fra disks"   exit 11 fi     echo "WARNING!: Calling bck_database_mirror.sh host ${NODE_BCK_SERVER}..." ssh ${NODO_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh if [ $? 
-ne 0 ] then   echo "Error, when remote executing the backup "   exit 12 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Cleaning the archived redo logs already copied to tape ..." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} run { resync catalog; delete noprompt archivelog all backed up 1 times to device type sbt; } ! if [ $? -ne 0 ] then   echo "Error when cleaning the archived redo logs"   exit 13 fi echo "${V_DATE} ####################################################" echo "Backup sucessfully executed!!" echo "####################################################################" exit 0   ------------------------------------------------------------------------------ ------------------------** BACKUP SERVER NODE ** ----------------------------- ------------------------------------------------------------------------------   #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    ###    NAME ###     bck_database_mirror.sh ### ###    DESCRIPTION ###      Backup @ backup server ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###      Oracle                    28/01/10     - Creacion         V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ASM instance ..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_asm.sh -- This script is supposed to start the ASM instance in the backup server   if [ $? -ne 0 ]   then     echo "Error when tying to start ASM instance."     exit 1   fi       . ${V_ADMIN_DIR}/setenv_asm.sh -- This script is supposed to set the env. variables of the ASM instance   if [ $? -ne 0 ]   then     echo "Error when setting the ASM environment"     exit 1   fi       V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "The asm diskgroups/disks dettected are the following ..."   echo "####################################################################"     sqlplus /nolog <<-!     whenever sqlerror exit 1     connect / as sysdba     whenever sqlerror exit     SET LINES 200     COL PATH FORMAT A25     SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP WHERE DISK.GROUP_NUMBER=DISK_GROUP.GROUP_NUMBER; !       V_ADMIN_DIR=`dirname $0`   . ${V_ADMIN_DIR}/setenv_instance.sh -- This script is supposed to set the env. variables of the database instance   if [ $? -ne 0 ]   then     echo "Error when setting the database instance environment"     exit 1   fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ${DB_NAME} in MOUNT mode..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_instance_mount.sh -- This script is supposed to do a startup mount   if [ $? -ne 0 ]   then     echo "Error starting  ${DB_NAME} in MOUNT mode"     exit 1   fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Executing RMAN backup..."   echo "####################################################################"   rman<<-!   
connect target /   connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}   run {   allocate channel ch1 type 'SBT_TAPE' parms'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)'; -- TDPO Media Library   crosscheck archivelog all;   backup tag BCK_CONTROLFILE_ST_${DB_NAME}   format 'ctl_%d_%s__%p_%t'   controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';   backup tag BCK_DATAFILE_ST_${DB_NAME} full   format 'db_%d_%s_%p_%t'database;   backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;   release channel ch1;   } !   if [ $? -ne 0 ]   then     echo "Error executing the RMAN backup"     exit 1   fi     ${V_ADMIN_DIR}/stop_instance_immediate.sh -- This script is supposed to do a shutdown immediate of the database instance   ${ADMIN_DIR}/stop_asm_immediate.sh -- This script is supposed to do a shutdown immediate of the ASM instance   exit 0     fi   Hope it helps someone! --L
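One closing note that the scripts above don't show: to make this truly hands-off, the production-side script is typically wired up to a scheduler. A crontab entry along these lines would kick off the split-mirror backup nightly at 2am - the paths and log location are just assumptions, adjust them to your own layout:

00 2 * * * /u01/app/oracle/admin/scripts/bck_database.sh >> /u01/app/oracle/admin/scripts/trace_dir_location/bck_database.cron.log 2>&1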

    Read the article

  • SOA Suite Integration: Part 1: Building a Web Service

    - by Anthony Shorten
    Over the next few weeks I will be posting blog entries outlying the SOA Suite integration of the Oracle Utilities Application Framework. This will illustrate how easy it is to integrate by providing some samples. I will use a consistent set of features as examples. The examples will be simple and while will not illustrate ALL the possibilities it will illustrate the relative ease of integration. Think of them as a foundation. You can obviously build upon them. Now, to ease a few customers minds, this series will certainly feature the latest version of SOA Suite and the latest version of Oracle Utilities Application Framework but the principles will apply to past versions of both those products. So if you have Oracle SOA Suite 10g or are a customer of Oracle Utilities Application Framework V2.1 or above most of what I will show you will work with those versions. It is just easier in Oracle SOA Suite 11g and Oracle Utilities Application Framework V4.x. This first posting will not feature SOA Suite at all but concentrate on the capability for the Oracle Utilities Application Framework to create Web Services you can use for integration. The XML Application Integration (XAI) component of the Oracle Utilities Application Framework allows product objects to be exposed as XML based transactions or as Web Services (or both). XAI was written before Web Services became fashionable and has allowed customers of our products to provide a consistent interface into and out of our product line. XAI has been enhanced over the last few years to take advantages of the maturing landscape of Web Services in the market place to a point where it now easier to integrate to SOA infrastructure. There are a number of object types that can be exposed as Web Services: Maintenance Objects – These are the lowest level objects that can be exposed as Web Services. Customers of past versions of the product will be familiar with XAI services based upon Maintenance Objects as they used to be the only method of generating Web Services. These are still supported for background compatibility but are starting to become less popular as they were strict in their structure and were solely attribute based. To generate Maintenance Object based Web Services definition you need to use the XAI Schema Editor component. Business Objects – In Oracle Utilities Application Framework V2.1 we introduced the concept of Business Objects. These are site or industry specific objects that are based upon Maintenance Objects. These allow sites to respecify, in configuration, the structure and elements of a Maintenance Object and other Business Objects (they are true objects with support for inheritance, polymorphism, encapsulation etc.). These can be exposed as Web Services. Business Services – As with Business Objects, we introduced Business Services in Oracle Utilities Application Framework V2.1 which allowed applications services and query zones to be expressed as custom services. These can then be exposed as Web Services via the Business Service definition. Service Scripts - As with Business Objects and Business Services, we introduced Service Scripts in Oracle Utilities Application Framework V2.1. These allow services and/objects to be combined into complex objects or simply expose common routines as callable scripts. These can also be defined as Web Services. For the purpose of this series we will restrict ourselves to Business Objects. The techniques can apply to any of the objects discussed above. 
Now, lets get to the important bit of this blog post, the creation of a Web Service. To build a Business Object, you first logon to the product and navigate to the Administration Menu by selecting the Admin Menu from the Menu action on left top of the screen (next to Home). A popup menu will appear with the menu’s available. If you do not see the Admin menu then you do not have authority to use it. Here is an example: Navigate to the B menu and select the + symbol next to the Business Object menu item. This indicates that you want to ADD a new Business Object. This menu will appear if you are running Alphabetic mode in your installation (I almost forgot that point). You will be presented with the Business Object maintenance screen. You will fill out the following on the first tab (at a minimum): Business Object – The name of the Business Object. Typically you will make it descriptive and also prefix with CM to denote it as a customization (you can easily find it if you prefix it). As I running this on my personal copy of the product I will use my initials as the prefix and call the sample Web Service “AS-User”. Description – A short description of the object to tell others what it is used for. For my example, I will use “Anthony Shorten’s User Object”. Detailed Description – You can add a long description to help other developers understand your object. I am just going to specify “Anthony Shorten’s Test Object for SOA Suite Integration”. Maintenance Object – As this Business Service is going to be based upon a Maintenance Object I will specify the desired Maintenance Object. In this example, I have decided to use the Framework object USER. Now, I chose this for a number of reasons. It is meaningful, simple and is across all our product lines. I could choose ANY maintenance object I wished to expose (including any custom ones, if I had them). Parent Business Object – If I was not using a Maintenance Object but building a child Business Object against another Business Object, then I would specify the Parent Business Object here. I am not using Parent’s so I will leave this blank. You either use Parent Business Object or Maintenance Object not both. Application Service – Business Objects like other objects are subject to security. You can attach an Application Service to an object to specify which groups of users (remember services are attached to user groups not users) have appropriate access to the object. I will use a default service provided with the product, F1-DFLTS ,as this is just a demonstration so I do not have to be too sophisticated about security. Instance Control – This allows the object to create instances in its objects. You can specify a Business Object purely to hold rules. I am being simple here so I will set it to Allow New Instances to allow the Business Object to be used to create, read, update and delete user records. The rest of the tab I will leave empty as I want this to be a very simple object. Other options allow lots of flexibility. The contents should look like this: Before saving your work, you need to navigate to the Schema tab and specify the contents of your object. I will save some time. When you create an object the schema will only contain the basic root elements of the object (in fact only the schema tag is visible). When you go to the Schema Tab, on the dashboard you will see a BO Schema zone with a solitary button. This will allow you to Generate the Schema for you from our metadata. 
Click on the Generate button to generate a basic schema from the metadata. You will now see a Schema with the element tags and references to the metadata of the Maintenance object (in the mapField attribute). I could spend a while outlining all the ways you can change the schema with defaults, formatting, tagging etc but the online help has plenty of great examples to illustrate this. You can use the Schema Tips zone in the for more details of the available customizations. Note: The tags are generated from the language pack you have installed. The sample is English so the tags are in English (which is the base language of all installations). If you are using a language pack then the tags will be generated in the language of the user that generated the object. At this point you can save your Business Object by pressing the Save action. At this point you have a basic Business Object based on the USER maintenance object ready for use but it is not defined as a Web Service yet. To do this you need to define the newly created Business Object as an XAI Inbound Service. The easiest and quickest way is to select + off the XAI Inbound Service off the context menu on the Business Object maintenance screen. This will prepopulate the service definition with the following: Adapter – This will be set to Business Adaptor. This indicates that the service is either Business Object, Business Service or Service Script based. Schema Type – Whether the object is a Business Object, Business Service or Service Script. In this case it is a Business Object. Schema Name – The name of the object. In this case it is the Business Object AS-User. Active – Set to Yes. This means the service is available upon startup automatically. You can enable and disable services as needed. Transaction Type – A default transaction type as this is Business Object Service. More about this in later postings. In our case we use the default Read. This means that if we only specify data and not a transaction type then the product will assume you want to issue a read against the object. You need to fill in the following: XAI Inbound Service – The name of the Web Service. Usually people use the same name as the underlying object , in the case of this example, but this can match your sites interfacing standards. By the way you can define multiple XAI Inbound Services/Web Services against the same object if you want. Description and Detail Description – Documentation for your Web Service. I just supplied some basic documentation for this demonstration. You can now save the service definition. Note: There are lots of other options on this screen that allow for behavior of your service to be specified. I will leave them blank for now. When you save the service you are issued with two new pieces of information. XAI Inbound Service Id is a randomly generated identifier used internally by the XAI Servlet. WSDL URL is the WSDL standard URL used for integration. We will take advantage of that in later posts. An example of the definition is shown below: Now you have defined the service but it will only be available when the next server restart or when you flush the data cache. XAI Inbound Services are cached for performance so the cache needs to be told of this new service. To refresh the cache you can use the Admin –> X –> XAI Command menu item. From the command dropdown select Refresh Registry and press Send Command. You will see an XML of the command sent to the server (the presence of the XML means it is finished). 
If you have an error around the authorization, then check your default user and password settings on the XAI Options menu item. Be careful with flushing the cache as the cache is shared (unless of course you are the only Web Service user on the system – In that case it only affects you). The Web Service is NOW available to be used. To perform a simple test of your new Web Service, navigate to the Admin –> X –> XAI Submission menu item. You will see an open XML request tab. You need to type in the request XML you want to test in the Main tab. The first tag is the XAI Inbound Service Name and the elements are as per your schema (minus the schema tag itself as that is only used internally). My example is as follows (I want to return the details of user SYSUSER) – Remember to close tags. Hitting the Save button will issue the XML and return the response according to the Business Object schema. Now before you panic, you noticed that it did not ask for credentials. It propagates the online credentials to the service call on this function. You now have a Web Service you can use for integration. We will reuse this information in subsequent posts. The process I just described can be used for ANY object in the system you want to expose. This whole process at a minimum can take under a minute. Obviously I only showed the basics but you can at least get an appreciation of the ease of defining a Web Service (just by using a browser). The next posts now build upon this. Hope you enjoyed the post.
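As a footnote to the XAI Submission test described above, the request XML typed into the Main tab for this example would look something like the fragment below. The outer tag is the XAI Inbound Service name (AS-User in this case); the inner element name is an assumption, since the real tags come from the schema generated for the USER maintenance object:

<AS-User>
   <user>SYSUSER</user>
</AS-User>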

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> var message = window.localStorage.getItem("message"); alert(message); </script>   You can only add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.
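As a rough recap of the client-side flow described above, the update check boils down to something like the following sketch. It assumes the Storage wrapper shown earlier and an entriesHelper object exposing the merge() method from EntriesHelper.js; how the new version timestamp is tracked here is an assumption and the real application may handle it differently:

var store = new Storage();

function refreshEntries(callback) {
    var entries = store.get("entries") || [];
    var lastUpdated = store.get("entriesLastUpdated") || 0;  // 0 pulls the full database on first use

    $.ajax({
        dataType: "json",
        url: "/Services/EntryService.svc/Entries?$filter=" +
             encodeURIComponent("(LastUpdated gt " + lastUpdated + "L)"),
        success: function (result) {
            var changes = result["d"];              // unwrap the OData payload
            var merged = entriesHelper.merge(entries, changes);

            // remember the newest version timestamp we have seen
            for (var i = 0; i < changes.length; i++) {
                if (changes[i].LastUpdated > lastUpdated)
                    lastUpdated = changes[i].LastUpdated;
            }

            // persist the merged entries and the new version timestamp
            store.set("entries", merged);
            store.set("entriesLastUpdated", lastUpdated);

            callback(merged);
        }
    });
}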

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
I've been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA style 'application', this particular page is just that: a single page with a large amount of data that has to be visible all at once. The problem is that when this large page loads, it flickers and briefly displays template markup before it kicks into its actual content rendering. This is exactly what the Angular ng-cloak directive is supposed to address, but in this case I had no luck getting it to work properly.

This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular's routing and view swapping features couldn't be applied. Instead, we decided to have one very big view with lots of ng-controllers and directives to break out the logic. For code separation this works great: there are a number of small controllers that each deal with their own isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor Views, which made breaking the HTML into manageable pieces easy and made migrating this page from a previous server side Razor page much easier. As a bonus, we were also able to reuse most of our server side localization without many changes. But as a result of this choice, the initial HTML document that loads is rather large, even without any data loaded into it, resulting in a fairly large DOM tree that Angular must manage.

Large Page and Angular Startup

The problem on this particular page is that there's quite a bit of markup: about 35k worth of markup without any data loaded, in fact. It's a large HTML page with a complex DOM tree, and there are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to hide the element it cloaks so that you don't see the flash of these markup expressions when the page first loads, before Angular has had a chance to render data into them:

<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" ng-cloak>

Note the ng-cloak attribute on this element, which here is an outer wrapper around most of this large page's content. ng-cloak is supposed to prevent displaying the content below it until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result is a brief flicker of un-rendered markup. It's brief, but plenty ugly, right? And the slower the machine, the longer it takes to process the initial HTML DOM and the more noticeable the flash becomes.

ng-cloak Styles

ng-cloak works by temporarily hiding the marked-up element, and it does this by essentially applying a style like this:

[ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak {
    display: none !important;
}

This style is inlined as part of AngularJs itself.
If you look at the angular.js source file, you'll find this at the very end of the file:

!angular.$$csp() && angular.element(document)
    .find('head')
    .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' +
             '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' +
             '.ng-hide{display:none !important;}ng\\:form{display:block;}' +
             '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' +
             '</style>');

This is meant to initially hide any elements that carry the ng-cloak attribute or one of the other Angular directive markup permutations. Unfortunately, on this particular web page ng-cloak had no effect and I still saw the flicker.

Why doesn't ng-cloak work?

The problem, of course, is timing. Angular needs to get control of the page before it can do anything, including processing the ng-cloak attribute (or the corresponding style, etc.). Because this page is rather large (about 35k of non-data HTML), it takes the browser a while to plow through the HTML. With the Angular <script> tag defined at the bottom of the page, after the HTML content, there's a slight delay, and that delay causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem.

Workarounds

There are a number of simple ways around this issue, and some of them are hinted at in the Angular documentation.

Load Angular Sooner

One obvious thing that would help is to load Angular at the top of the page, BEFORE the DOM loads, which would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it's not good practice to load scripts in the header because of page load performance. This is especially true if you load other libraries like jQuery, which should be loaded prior to Angular so that Angular can use jQuery rather than its own jqLite subset. This is not something I'd normally like to do, and it's also something I'd likely forget in the future and end up right back here :-).

Use ng-include for Child Content

Angular supports nesting of child templates via the ng-include directive, which essentially delay-loads HTML content. This helps by moving a lot of the template content out of the main page, so Angular gets control a lot sooner and can hide the template markup. In hindsight, it might have been smarter to break this page out with client side ng-include directives instead of the MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (i.e. the HTML) back together into a single, rather large HTML document. Razor provides the logical separation, but still produces one large physical result document. On the other hand, Razor also let us handle a few security related blocks with server side template logic that simply excludes the parts of the UI the user is not allowed to see. That is something you can't really do with client side exclusion like ng-hide/ng-show: client side content is always there, whereas on the server side you can simply not send it to the client. Another reason I'm not a huge fan of ng-include is that it adds another HTTP hit per template, as templates are loaded from the server dynamically as needed. Given that this page was already heavy with resources, adding another 10 separate ng-include directives wouldn't be beneficial :-). ng-include is a valid option if you start from scratch and partition your logic, and if you don't have complex pages, having completely separate views that are swapped in as they're accessed is even better, but we didn't have that option because the information has to be on screen all at once.
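If the extra requests are the main objection, one way to keep ng-include's separation without the HTTP hits is to pre-populate Angular's $templateCache so the partials resolve locally. The sketch below is not from the application described here; the module name, template URL, and markup are made up for illustration, and in practice a build step would typically generate these registrations from the partial files.

// Hypothetical sketch: register a partial in the template cache so that
// ng-include finds it without making a request to the server.
var app = angular.module("app");   // assumes the application module already exists

app.run(function ($templateCache) {
    $templateCache.put("partials/orderSummary.html",
        '<div class="order-summary">' +
            '<span ng-bind="lineItem.MpsOrderNo"></span>' +
        '</div>');
});

A matching <div ng-include="'partials/orderSummary.html'"></div> would then render from the cache instead of triggering another request.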
Avoid using {{ }} Expressions

The biggest issue that ng-cloak attempts to address isn't so much displaying the original content, it's displaying the empty {{ }} markup expression tags embedded in that content. They give you the dreaded "now you see it, now you don't" effect, where you sometimes see three separate rendering states: markup junk, empty views, then views filled with data. If you remove the {{ }} expressions from the page, you remove most of the perceived double draw effect, because you effectively start with a blank form and go straight to a filled form. To do this you can forego {{ }} expressions and replace them with ng-bind directives on DOM elements. For example, you can turn:

<div class="list-item-name listViewOrderNo">
    <a href='#'>{{lineItem.MpsOrderNo}}</a>
</div>

into:

<div class="list-item-name listViewOrderNo">
    <a href="#" ng-bind="lineItem.MpsOrderNo"></a>
</div>

to get identical results, but because the {{ }} expression has been removed there's no double draw effect for this element. Again, not a great solution. The {{ }} syntax reads cleaner and is more fluent to type IMHO. In some cases you may also not have an outer element to attach ng-bind to, which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values, like {{Firstname}} {{Lastname}} for example. It's an option though, especially if you think of this issue up front and don't have a ton of expressions to deal with.

Add the ng-cloak Styles manually

You can also explicitly define the CSS styles that Angular injects via code in your own application's style sheet. By doing so, the styles are available immediately and are applied right when the page loads: no flicker. I use the minimal:

[ng-cloak] { display: none !important; }

which works for:

<div id="mainContainer" class="mainContainer dialog boxshadow" ng-app="app" ng-cloak>

If you use one of the other attribute or class combinations, add the other CSS selectors as well or use the full style shown earlier. Angular still injects its own version of the ng-cloak styling when it loads, but your stylesheet rule is available immediately and hides the content before that CSS is injected into the page. Adding the CSS to your own style sheet works well, and is IMHO by far the best option.

The nuclear option: Hiding the Content manually

Using the explicit CSS is the best choice, so the following shouldn't ever be necessary. But I'll mention it here because it gives some insight into how you can manually hide and show content on load for other frameworks or in your own markup based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn't doing its job. After wasting an hour getting nowhere, I finally decided to just manually hide and show the container. The idea is simple: initially hide the container, then show it once Angular has done its initial processing and removed the template markup from the page.
To do this I used:

<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" style="display:none">

Notice the display:none style that explicitly hides the element when the page first renders. Then, once Angular has run its initialization and processed the template markup on the page, you can show the content. For Angular, this 'ready' event is the app.run() function:

app.run( function ($rootScope, $location, cellService) {
    $("#mainContainer").show();
    …
});

This removes the display:none style and the content displays. By the time app.run() fires, the DOM is ready to be displayed with filled data, or at least empty data, because Angular has gotten control.

Edge Case

Clearly this is an edge case. In general, initial HTML pages tend to be reasonably sized, and the load times for the HTML and for Angular are fast enough that there's no flicker between the rendering stages. This only becomes an issue as the initial pages get rather large. Regardless, if you have an Angular application it's probably a good idea to add the CSS style to your application's style sheet (or a common shared one) just to make sure that content is always hidden. You never know how slow a browser somebody might be running, and while your super fast dev machine might not show any flicker, grandma's old XP box very well might…

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in Angular, JavaScript, CSS, HTML.

    Read the article

< Previous Page | 601 602 603 604 605 606 607 608 609 610 611 612  | Next Page >