Search Results


  • C# Custom user settings class not saving

    - by Zenox
    I have the following class:

        [Serializable]
        [XmlRoot(ElementName = "TextData", IsNullable = false)]
        public class TextData
        {
            private System.Drawing.Font fontColor;

            [XmlAttribute(AttributeName = "Font")]
            public System.Drawing.Font Font { get; set; }

            [XmlAttribute(AttributeName = "FontColor")]
            public System.Drawing.Color FontColor { get; set; }

            [XmlAttribute(AttributeName = "Text")]
            public string Text { get; set; }

            public TextData() { }
        }

    And I'm attempting to save it with the following code:

        // Create our font dialog
        FontDialog fontDialog = new FontDialog();
        fontDialog.ShowColor = true;

        // Display the dialog and check for an OK
        if (DialogResult.OK == fontDialog.ShowDialog())
        {
            // Save our changes for the font settings
            if (null == Properties.Settings.Default.MainHeadlineTextData)
            {
                Properties.Settings.Default.MainHeadlineTextData = new TextData();
            }
            Properties.Settings.Default.MainHeadlineTextData.Font = fontDialog.Font;
            Properties.Settings.Default.MainHeadlineTextData.FontColor = fontDialog.Color;
            Properties.Settings.Default.Save();
        }

    Every time I load the application, Properties.Settings.Default.MainHeadlineTextData is still null; saving does not seem to take effect. I read in another post that the class must be public, and it is. Any ideas why this is not working properly?
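    One avenue worth checking (my aside, not part of the original question): XmlSerializer cannot write complex types such as System.Drawing.Font or Color as XML attributes, so an XML-serialized user setting of this type can fail to round-trip silently. A minimal sketch, assuming a user-scoped setting named MainHeadlineTextData in the generated Settings class (System.Configuration), is to force binary serialization instead:

        // Hypothetical sketch: in the Settings partial class, mark the custom
        // setting to be serialized as binary rather than XML. Requires TextData
        // to be [Serializable], which it already is.
        [UserScopedSetting]
        [SettingsSerializeAs(SettingsSerializeAs.Binary)]
        public TextData MainHeadlineTextData
        {
            get { return (TextData)this["MainHeadlineTextData"]; }
            set { this["MainHeadlineTextData"] = value; }
        }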

  • str_replace between two numerical strings

    - by user330840
    Another problem with str_replace: I would like to turn the following $title data into URL slugs by taking the string between the number at the beginning and the dash (-):

        Chicago's Public Schools - $10.3M
        New Jersey - $3M
        Michigan: Public Health - $1M

    The desired output is:

        chicago-public-school
        new-jersey
        michigan-public-health

    This is the PHP code I am using:

        $title = ucwords(strtolower(strip_tags(str_replace("1: ", "", $title))));
        $x = 1;
        while ($x <= 10) {
            $title = ucwords(strtolower(strip_tags(str_replace("$x: ", "", $title))));
            $x++;
        }
        $link = preg_replace('/[<>()!#?:.$%\^&=+~`*&#233;"\']/', '', $title);
        $money = str_replace(" ", "-", $link);
        $link = explode(" - ", $link);
        $link = preg_replace(" (\(.*?\))", "", $link[0]);
        $amount = preg_replace(" (\(.*?\))", "", $link[1]);
        $code_entities_match = array('&#39;s', '&quot;', '!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '+', '{', '}', '|', ':', '"', '<', '>', '?', '[', ']', '', ';', "'", ',', '.', '_', '/', '*', '+', '~', '`', '=', ' ', '---', '--', '--');
        $code_entities_replace = array('', '-', '-', '', '', '', '-', '-', '', '', '', '', '', '', '', '-', '', '', '', '', '', '', '', '', '', '-', '', '-', '-', '', '', '', '', '', '-', '-', '-', '-');
        $link = str_replace($code_entities_match, $code_entities_replace, $link);
        $link = strtolower($link);

    Unfortunately, the result I got was:

        -chicagoamp9s-public-school
        2-new-jersey
        3-michigan-public-health

    Anyone have a better solution for this? Thanks, guys! (The &#39; changed into amp9 - I wonder why?)
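    A minimal alternative sketch (my suggestion, not from the original thread): decoding the HTML entities before stripping avoids residue like amp9, and a few regular expressions can handle the leading counter, the trailing amount, and the dashes in one pass:

        <?php
        // Hypothetical slug helper for titles like "1: Chicago's Public Schools - $10.3M".
        function slugify($title) {
            // Decode entities such as &#39; first, so stripping leaves no residue.
            $title = html_entity_decode($title, ENT_QUOTES, 'UTF-8');
            // Drop a leading "N: " counter and everything from the " - $amount" onward.
            $title = preg_replace('/^\d+:\s*/', '', $title);
            $title = preg_replace('/\s+-\s+\$.*$/', '', $title);
            // Keep letters, digits and spaces; collapse whitespace into single dashes.
            $title = preg_replace('/[^A-Za-z0-9 ]/', '', $title);
            return strtolower(preg_replace('/\s+/', '-', trim($title)));
        }

        echo slugify("Chicago's Public Schools - \$10.3M"); // chicagos-public-schools

    Note this yields chicagos-public-schools rather than the exact chicago-public-school shown above; trimming plurals and possessives would need an extra rule.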

  • displaying a dialog using an activity?

    - by ricardo123
    What am I doing wrong here, or what do I need to add?

        package dialog.com;

        import android.app.Activity;
        import android.app.AlertDialog;
        import android.content.DialogInterface;
        import android.app.Dialog;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.Button;
        import android.widget.Toast;

        public class Dialog extends Activity {
            CharSequence[] items = { "google", "apple", "microsoft" };
            boolean[] itemschecked = new boolean[items.length];

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                Button btn = (Button) findViewById(R.id.btn_dialog);
                btn.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        showDialog(0);
                    }
                });
            }

            @Override
            protected Dialog onCreateDialog(int id) {
                switch (id) {
                case 0:
                    return new AlertDialog.Builder(this)
                        .setIcon(R.drawable.icon)
                        .setTitle("This is a Dialog with some simple text...")
                        .setPositiveButton("ok", new DialogInterface.OnClickListener() {
                            public void onClick(DialogInterface dialog, int whichbutton) {
                                Toast.makeText(getBaseContext(), "OK Clicked!", Toast.LENGTH_SHORT).show();
                            }
                        });
                        .setNegativeButton("cancel", new DialogInterface.OnclickListener() {
                            public void onClick(DialogInterface dialog, int whichButton) {
                                Toast.makeText(getBaseContext(), "cancel clicked!", Toast.LENGTH_SHORT).show();
                            }
                        });
                        .setMultiChoiceItems(itemschecked, new DialogInterface.OnMultiChoiceClickListener() {
                            @Override
                            public void onClick(dialoginterface dialog, int which, boolean isChecked) {
                                Toast.makeText(getBaseContext(), items[which] + (isChecked ? " checked!" : "unchecked!"), Toast.LENGTH_SHORT).show();
                            }
                        })
                        .create();
                }
                return null:
            }
        }
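    For reference, a hedged sketch (mine, not from the original post) of how a builder chain like this would compile: the chain must be a single expression, so there are no semicolons before .setNegativeButton(...) and .setMultiChoiceItems(...); the listener types are DialogInterface.OnClickListener and DialogInterface.OnMultiChoiceClickListener (note the capitalization); setMultiChoiceItems() takes the item labels as its first argument; and "return null:" needs a semicolon. Naming the activity class Dialog also shadows android.app.Dialog in the onCreateDialog(int) signature, so renaming the activity avoids confusion.

        // Hypothetical corrected version of the builder chain only:
        return new AlertDialog.Builder(this)
            .setIcon(R.drawable.icon)
            .setTitle("This is a Dialog with some simple text...")
            .setPositiveButton("ok", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int whichButton) {
                    Toast.makeText(getBaseContext(), "OK clicked!", Toast.LENGTH_SHORT).show();
                }
            })
            .setNegativeButton("cancel", new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int whichButton) {
                    Toast.makeText(getBaseContext(), "Cancel clicked!", Toast.LENGTH_SHORT).show();
                }
            })
            .setMultiChoiceItems(items, itemschecked, new DialogInterface.OnMultiChoiceClickListener() {
                public void onClick(DialogInterface dialog, int which, boolean isChecked) {
                    Toast.makeText(getBaseContext(), items[which] + (isChecked ? " checked!" : " unchecked!"), Toast.LENGTH_SHORT).show();
                }
            })
            .create();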

  • A self-creator: what pattern is this? (PHP)

    - by user151841
    I have several classes that are basically interfaces to database rows. Since the class assumes that a row already exists (__construct expects a field value), there is a public static function that allows creation of the row and returns an instance of the class. Here's a pseudo-code example:

        class fruit {
            public $id;

            public function __construct( $id ) {
                $this->id = $id;
                $sql = "SELECT * FROM Fruits WHERE id = $id";
                ...
                $this->arrFieldValues[$field] = $row[$value];
            }

            public function __get( $var ) {
                return $this->arrFieldValues[$var];
            }

            public function __set( $var, $val ) {
                $sql = "UPDATE fruits SET $var = $val WHERE id = $this->id";
            }

            public static function create( $fruit ) {
                $sql = "INSERT INTO Fruits ( fruit_name ) VALUES ( '$fruit' )";
                $id = mysql_insert_id();
                $fruit = & new fruit($id);
                return $fruit;
            }
        }

        $obj1 = fruit::create( "apple" );
        $obj2 = & new fruit( 12 );

    What is this pattern called?

    Edit: I changed the example to one that has more database-interface functionality. Most of the time, this kind of class would be instantiated normally, through __construct(). But sometimes, when you need to create a new row first, you would call create().

  • Why are my threads being terminated?

    - by Sephy
    Hi, I'm trying to repeat calls to methods through 3 different threads. But after I start my threads, during the next iteration of my loop, they are all terminated, so nothing is executed... The code is as follows:

        public static void main(String[] args) {
            main = new Main();
            pollingThread.start();
        }

        static Thread pollingThread = new Thread() {
            @Override
            public void run() {
                while (isRunning) {
                    main.poll(); // test the state of the threads
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        };

        public void poll() {
            if (clientThread == null) {
                clientThread = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        // create some objects
                    }
                });
                clientThread.start();
            } else if (clientThread.isAlive()) {
                // do some treatment
            }

            if (gestionnaireThread == null) {
                gestionnaireThread = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        // create some objects
                    }
                });
                gestionnaireThread.start();
            } else if (gestionnaireThread.isAlive()) {
                // do some treatment
            }

            if (marchandThread == null) {
                marchandThread = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        // create some objects
                    }
                });
                marchandThread.start();
            } else if (marchandThread.isAlive()) {
                // do some treatment
            }
        }

    For some reason, when I test the state of my different threads, they appear as runnable, and then at the 2nd iteration they are all terminated... What am I doing wrong? I actually get no error, but the threads are terminated, so my loop keeps looping and telling me the threads are terminated... Thanks for any help.
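    A hedged aside (mine, not from the original post): a thread reaches the TERMINATED state as soon as its run() method returns, and each run() here only creates some objects and exits, which matches the behaviour described. A minimal sketch of a worker that stays alive between polls:

        // Minimal sketch: a worker thread stays RUNNABLE only while run() is executing,
        // so repeated work needs a loop (or blocking on a queue) inside run().
        Thread clientThread = new Thread(new Runnable() {
            @Override
            public void run() {
                // create some objects ...
                while (!Thread.currentThread().isInterrupted()) {
                    // ... do repeated work here
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // restore the flag and exit
                        break;
                    }
                }
            }
        });
        clientThread.start();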

  • priority queue implementation

    - by davit-datuashvili
    I have implemented a priority queue and I am interested in whether it is correct:

        public class priqueue {
            private int n, maxsize;
            int x[];

            void swap(int i, int j) {
                int t = x[i];
                x[i] = x[j];
                x[j] = t;
            }

            public priqueue(int m) {
                maxsize = m;
                x = new int[maxsize + 1];
                n = 0;
            }

            void insert(int t) {
                int i, p;
                x[++n] = t;
                for (i = n; i > 1 && x[p = i / 2] > x[i]; i = p)
                    swap(p, i);
            }

            public int extramin() {
                int i, c;
                int t = x[1];
                x[1] = x[n--];
                for (i = 1; (c = 2 * i) <= n; i = c) {
                    if (c + 1 <= n && x[c + 1] < x[c])
                        c++;
                    if (x[i] <= x[c])
                        break;
                    swap(c, i);
                }
                return t;
            }

            public void display() {
                for (int j = 0; j < x.length; j++) {
                    System.out.println(x[j]);
                }
            }
        }

        public class priorityqueue {
            public static void main(String[] args) {
                priqueue pr = new priqueue(12);
                pr.insert(20);
                pr.insert(12);
                pr.insert(22);
                pr.insert(15);
                pr.insert(35);
                pr.insert(17);
                pr.insert(40);
                pr.insert(51);
                pr.insert(26);
                pr.insert(19);
                pr.insert(29);
                pr.insert(23);
                pr.extramin();
                pr.display();
            }
        }

    The result is:

        0 12 15 17 20 19 22 40 51 26 35 29 23
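    A hedged aside (not part of the original question): the heap is stored 1-based, so slot x[0] is never written - that is the stray leading 0 in the output - and after one extraction only n elements are live. A display loop restricted to the live slots would be:

        public void display() {
            // Slots 1..n hold the heap; slot 0 is unused in this 1-based layout.
            for (int j = 1; j <= n; j++) {
                System.out.println(x[j]);
            }
        }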

  • Using addMouseListener() and paintComponent() for JPanel

    - by Alex
    This is a follow-up to my previous question. I've simplified things as much as I could, and it still doesn't work! Although the good thing is I got around using getGraphics(). A detailed explanation of what goes wrong here is massively appreciated. My suspicion is that something's wrong with the way I used the addMouseListener() method here.

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.event.MouseAdapter;
        import java.awt.event.MouseEvent;
        import javax.swing.JFrame;
        import javax.swing.JPanel;

        public class MainClass1 {
            private static PaintClass22 inst2 = new PaintClass22();

            public static void main(String args[]) {
                JFrame frame1 = new JFrame();
                frame1.add(inst2);
                frame1.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame1.setTitle("NewPaintToolbox");
                frame1.setSize(200, 200);
                frame1.setLocationRelativeTo(null);
                frame1.setVisible(true);
            }
        }

        class PaintClass11 extends MouseAdapter {
            int xvar;
            int yvar;
            static PaintClass22 inst1 = new PaintClass22();

            public PaintClass11() {
                inst1.addMouseListener(this);
                inst1.addMouseMotionListener(this);
            }

            @Override
            public void mouseClicked(MouseEvent arg0) {
                xvar = arg0.getX();
                yvar = arg0.getY();
                inst1.return_paint(xvar, yvar);
            }
        }

        class PaintClass22 extends JPanel {
            private static int varx;
            private static int vary;

            public void return_paint(int input1, int input2) {
                varx = input1;
                vary = input2;
                repaint(varx, vary, 10, 10);
            }

            public void paintComponent(Graphics g) {
                super.paintComponents(g);
                g.setColor(Color.RED);
                g.fillRect(varx, vary, 10, 10);
            }
        }
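    A hedged reading (mine, not from the original post): two different PaintClass22 instances exist here - MainClass1 displays inst2, while PaintClass11 registers listeners on its own inst1, which is never added to a frame, and nothing ever constructs a PaintClass11 in the first place. There is also super.paintComponents(g), plural, where paintComponent(g) was presumably meant. A minimal wiring sketch that keeps a single panel:

        // Hypothetical sketch: attach the listener to the panel actually displayed.
        final PaintClass22 panel = new PaintClass22();
        panel.addMouseListener(new MouseAdapter() {
            @Override
            public void mouseClicked(MouseEvent e) {
                panel.return_paint(e.getX(), e.getY()); // repaint a 10x10 cell at the click
            }
        });
        frame1.add(panel);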

  • Is it possible to store an Enum value in a String?

    - by Narasimham K
    Actually, my Java program looks like this:

        public class Schedule {
            public static enum RepeatType {
                DAILY, WEEKLY, MONTHLY;
            }

            public static enum WeekdayType {
                MONDAY(Calendar.MONDAY), TUESDAY(Calendar.TUESDAY), WEDNESDAY(Calendar.WEDNESDAY),
                THURSDAY(Calendar.THURSDAY), FRIDAY(Calendar.FRIDAY), SATURDAY(Calendar.SATURDAY),
                SUNDAY(Calendar.SUNDAY);

                private int day;

                private WeekdayType(int day) {
                    this.day = day;
                }
            }

            public static List<Date> generateSchedule(RepeatType repeatType, List<WeekdayType> repeatDays) {
                // ... some logic here
            }
        }

    And I'm calling the method in my business class like the following:

        @RemotingInclude
        public void createEvent(TimetableVO timetableVO) {
            if ("repeatDays".equals(timetableVO.getSearchKey())) {
                List<Date> repeatDaysList = Schedule.generateSchedule(timetableVO.getRepeatType(), timetableVO.getRepeatDays());
            }
        }

    And finally, TimetableVO is:

        @Entity
        @Table(name = "EC_TIMETABLE")
        public class TimetableVO extends AbstractVO {
            // ...
            private RepeatType repeatType;
            private List<WeekdayType> repeatDays; // but in this case the method generateSchedule(-,-) was not called
            // ...
        }

    So my question is, which of the following is the better declaration:

        private List<WeekdayType> repeatDays;

    or:

        private String repeatDays; // if we declare it like this, how do we convert the String to the enum type?

    The generateSchedule() method takes enum-typed values...
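    A hedged aside (mine, not from the original question): Java enums round-trip to and from String via name() and valueOf(), so a String-backed field can be converted before calling generateSchedule(). A minimal sketch, assuming the Schedule class above and java.util imports:

        // Enum -> String and back for a single value.
        String stored = Schedule.WeekdayType.MONDAY.name();               // "MONDAY"
        Schedule.WeekdayType day = Schedule.WeekdayType.valueOf(stored);  // MONDAY again

        // A comma-separated String field -> List<WeekdayType> for generateSchedule().
        List<Schedule.WeekdayType> repeatDays = new ArrayList<Schedule.WeekdayType>();
        for (String s : "MONDAY,FRIDAY".split(",")) {
            repeatDays.add(Schedule.WeekdayType.valueOf(s.trim()));
        }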

  • SSH problems (ssh_exchange_identification: read: Connection reset by peer)

    - by kSiR
    I was running 11.10 and decided to do the full upgrade up to 12.04. After the update, SSH (not SSHD) is misbehaving when attempting to connect to other OpenSSH instances. I say OpenSSH because I am running a Dropbear sshd on my router and I am able to connect to it.

    When attempting to connect to an OpenSSH server:

        risk@skynet:~/.ssh$ ssh -vvv risk@someserver
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /home/risk/.ssh/config
        debug3: key names ok: [[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss]
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to someserver [someserver] port 22.
        debug1: Connection established.
        debug1: identity file /home/risk/.ssh/id_rsa type -1
        debug1: identity file /home/risk/.ssh/id_rsa-cert type -1
        debug1: identity file /home/risk/.ssh/id_dsa type -1
        debug1: identity file /home/risk/.ssh/id_dsa-cert type -1
        debug3: Incorrect RSA1 identifier
        debug3: Could not load "/home/risk/.ssh/id_ecdsa" as a RSA1 public key
        debug1: identity file /home/risk/.ssh/id_ecdsa type 3
        debug1: Checking blacklist file /usr/share/ssh/blacklist.ECDSA-521
        debug1: Checking blacklist file /etc/ssh/blacklist.ECDSA-521
        debug1: identity file /home/risk/.ssh/id_ecdsa-cert type -1
        ssh_exchange_identification: read: Connection reset by peer
        risk@skynet:~/.ssh$

    Against the Dropbear instance:

        risk@skynet:~/.ssh$ ssh -vvv root@darkness
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /home/risk/.ssh/config
        debug3: key names ok: [[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss]
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to darkness [192.168.1.1] port 22.
        debug1: Connection established.
        debug1: identity file /home/risk/.ssh/id_rsa type -1
        debug1: identity file /home/risk/.ssh/id_rsa-cert type -1
        debug1: identity file /home/risk/.ssh/id_dsa type -1
        debug1: identity file /home/risk/.ssh/id_dsa-cert type -1
        debug3: Incorrect RSA1 identifier
        debug3: Could not load "/home/risk/.ssh/id_ecdsa" as a RSA1 public key
        debug1: identity file /home/risk/.ssh/id_ecdsa type 3
        debug1: Checking blacklist file /usr/share/ssh/blacklist.ECDSA-521
        debug1: Checking blacklist file /etc/ssh/blacklist.ECDSA-521
        debug1: identity file /home/risk/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version dropbear_0.52
        debug1: no match: dropbear_0.52
        ...

    I have googled and ran most all of the fixes recommended from both the Debian and Arch sides, and none of them seem to resolve my issue. Any ideas?

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000,
            625000, 750000, 875000, 1000000,
            1125000, 1250000, 1375000, 1500000,
            1625000, 1750000, 1875000, 2000000,
            2125000, 2250000, 2375000, 2500000,
            2625000, 2750000, 2875000, 3000000,
            3125000, 3250000, 3375000, 3500000,
            3625000, 3750000, 3875000, 4000000,
            4125000, 4250000, 4375000, 4500000,
            4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T1
                PRIMARY KEY CLUSTERED (TID)
                ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T2
                PRIMARY KEY CLUSTERED (TID, Column1)
                ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX)
            (Column1)
        SELECT
            (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE
            n BETWEEN 1 AND 5000000;

    In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS (SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Next we load data into table T2. The relationship between the two tables is that table 2 contains 'n' rows for each row in table 1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX)
            (TID, Column1)
        SELECT
            T.TID,
            N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table T2:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty). Ok, that's the test data done.
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join, and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE
            $PARTITION.PFT(T1.TID) = 1
            AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let's force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    The resulting plan produces the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good.

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail.

    The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Co-located hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    In the estimated execution plan for the collocated join, the Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange... and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know:

        Merge join does not require a memory grant; and
        Merge join was the optimizer's preferred join option for a single partition join.

    Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

        SELECT
            row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE
                p.[object_id] = OBJECT_ID(N'T1', N'U')
                AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    The cardinality estimates aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

    Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole - summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
    We can work around that by writing the partition numbers to a temporary table (or table variable):

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        CREATE TABLE #P
        (
            partition_number integer PRIMARY KEY
        );

        INSERT #P (partition_number)
        SELECT p.partition_number
        FROM sys.partitions AS p
        WHERE
            p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1;

        SELECT
            row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

        DROP TABLE #P;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time). Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

    Parallel Collocated Merge Join

    We can produce the desired parallel plan using trace flag 8649 again:

        SELECT
            row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

    Performance

    The important thing is the performance of this parallel collocated merge join - just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

        Collocated parallel merge join: 1350ms
        Parallel hash join: 2600ms
        Collocated serial merge join: 3500ms
        Serial merge join: 5000ms
        Parallel merge join: 8400ms
        Collocated parallel hash join: 25,300ms (hash spill per partition)

    The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
    This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

    Parallel Collocated Merge Join with Demand Partitioning

    This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions, as sketched below):

        SELECT
            row_count = SUM(Subtotals.cnt)
        FROM
        (
            VALUES
                (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
                (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
                (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
        ) AS P (partition_number)
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);
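    A hedged sketch of that generation step (my addition, not from the original post), using the classic FOR XML PATH string-concatenation idiom available on SQL Server 2008:

        -- Hypothetical helper: build the '(1),(2),...,(41)' list from sys.partitions,
        -- ready to splice into the VALUES clause of the query above via dynamic SQL.
        DECLARE @vals nvarchar(max);

        SELECT @vals = STUFF(
        (
            SELECT N',(' + CONVERT(nvarchar(10), p.partition_number) + N')'
            FROM sys.partitions AS p
            WHERE
                p.[object_id] = OBJECT_ID(N'dbo.T1', N'U')
                AND p.index_id = 1
            ORDER BY p.partition_number
            FOR XML PATH('')
        ), 1, 1, N'');

        PRINT @vals; -- (1),(2),...,(41)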
    The actual execution plan again uses a Distribute Streams exchange with Demand partitioning, as in the collocated hash join plan seen earlier. The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

    Final Words

    It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated - down from 569MB to 1.2MB.

    The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

    From a thread's point of view...

    If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread.

    Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

    Related Reading

    Understanding and Using Parallelism in SQL Server
    Parallel Execution Plans Suck

    © 2013 Paul White - All Rights Reserved
    Twitter: @SQL_Kiwi

  • Quartz.Net Windows Service Configure Logging

    - by Tarun Arora
    In this blog post I'll be covering logging for the Quartz.Net Windows Service:

        01 - Why doesn't the Quartz.Net Windows Service log by default
        02 - Configuring the Quartz.Net Windows Service for logging to event log, file, console, etc.
        03 - Results: logging in action

    If you are new to Quartz.Net I would recommend going through:

        A brief introduction to Quartz.Net
        Walkthrough of installing & testing Quartz.Net as a Windows Service
        Writing & scheduling your first HelloWorld job with Quartz.Net

    01 - Why doesn't the Quartz.Net Windows Service log by default

    If you are trying to figure out why...

        the Quartz.Net Windows Service isn't logging
        the Quartz.Net Windows Service isn't writing anything to the event log
        the Quartz.Net Windows Service isn't writing anything to a file
        how to configure the Quartz.Net Windows Service to use log4net
        how to change the level of logging for the Quartz.Net Windows Service

    ...look no further; this blog post should help you answer these questions. Quartz.Net uses the Common.Logging framework for all of its logging needs. If you navigate to the directory where the Quartz.Net Windows Service is installed (I have the service installed in C:\Program Files (x86)\Quartz.net; you can find the location by looking at the properties of the service) and open 'Quartz.Server.exe.config', you'll see that Quartz.Net is already set up for logging to ConsoleAppender and EventLogAppender, but only 'ConsoleAppender' is set up as active. So, unless you have a console associated with the Quartz.Net service, you won't be able to see any logging.

        <log4net>
          <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%d [%t] %-5p %l - %m%n" />
            </layout>
          </appender>
          <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%d [%t] %-5p %l - %m%n" />
            </layout>
          </appender>
          <root>
            <level value="INFO" />
            <appender-ref ref="ConsoleAppender" />
            <!-- uncomment to enable event log appending -->
            <!-- <appender-ref ref="EventLogAppender" /> -->
          </root>
        </log4net>

    Problem: in the configuration above, the Quartz.Net Windows Service only has ConsoleAppender active, so no logging will be done to the event log. Moreover, the RollingFileAppender isn't set up at all, so Quartz.Net will not log to an application trace log file.

    02 - Configuring the Quartz.Net Windows Service for logging to event log, file, console, etc.

    Let's change this behaviour by changing the config file. In the config file below:

        I have added the RollingFileAppender, which configures the Quartz.Net service to write to a log file (<appender name="GeneralLog" type="log4net.Appender.RollingFileAppender">).
        I have specified the location for the log file (<arg key="configFile" value="Trace/application.log.txt"/>).
        I have enabled both the EventLogAppender and the RollingFileAppender to be written to by the Quartz.Net Windows Service.
        I have changed the default level of logging from 'Info' to 'All', which means all activity performed by the Quartz.Net Windows Service will be logged. You might want to tune this back to 'Debug' or 'Info' later, as logging 'All' produces a great deal of log data (<level value="ALL"/>).
        Since I have changed the logging level to 'All', I have added an application setting to turn off log4net's internal debug logging (<add key="log4net.Internal.Debug" value="false"/>).
        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
            <sectionGroup name="common">
              <section name="logging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging" />
            </sectionGroup>
          </configSections>
          <common>
            <logging>
              <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4net">
                <arg key="configType" value="INLINE" />
                <arg key="configFile" value="Trace/application.log.txt" />
                <arg key="level" value="ALL" />
              </factoryAdapter>
            </logging>
          </common>
          <appSettings>
            <add key="log4net.Internal.Debug" value="false" />
          </appSettings>
          <log4net>
            <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
              <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%d [%t] %-5p %l - %m%n" />
              </layout>
            </appender>
            <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
              <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%d [%t] %-5p %l - %m%n" />
              </layout>
            </appender>
            <appender name="GeneralLog" type="log4net.Appender.RollingFileAppender">
              <file value="Trace/application.log.txt" />
              <appendToFile value="true" />
              <maximumFileSize value="1024KB" />
              <rollingStyle value="Size" />
              <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%d{HH:mm:ss} [%t] %-5p %c - %m%n" />
              </layout>
            </appender>
            <root>
              <level value="ALL" />
              <appender-ref ref="ConsoleAppender" />
              <appender-ref ref="EventLogAppender" />
              <appender-ref ref="GeneralLog" />
            </root>
          </log4net>
        </configuration>

    Note - please ensure you restart the Quartz.Net Windows Service for the config changes to be picked up by the service.

    03 - Results: logging in action

    Once you start the Quartz.Net Windows Service, logging should be initiated, writing all activity to the console, the event log, and the file. See the screenshots:

        Figure - Quartz.Net Windows Service logging all activity to the event log
        Figure - Quartz.Net Windows Service logging all activity to the application log file

    Where is the output from the log4net ConsoleAppender? By default, the console isn't available in Windows services, web services, or Windows Forms applications; the output is simply discarded unless you are running the process interactively, which you can do by firing up Quartz.Server.exe -i to see the output.

    This was the fourth in the series of posts on enterprise scheduling using Quartz.Net; in the next post I'll be covering troubleshooting why a scheduled task hasn't fired on the Quartz.Net Windows Service. All Quartz.Net-specific blog posts are listed here. Thank you for taking the time to read this blog post. If you enjoyed it, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage a cloud-based Mobile Backend as a Service for building enterprise mobile apps.

    Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for mobile apps is increasing rapidly in day-to-day business. An enterprise can't survive without a proper mobility strategy: a better mobility strategy and faster delivery of your mobile apps will give you extra mileage for your business and IT strategy. So organizations and mobile developers are looking at different strategies for meeting this demand and adopting different development approaches for their mobile apps. Some developers are adopting hybrid mobile app development platforms for delivering their products on multiple platforms with fast time-to-market. Others are adopting a mobile enterprise application platform (MEAP) such as Kony for their enterprise mobile apps, for fast time-to-market and better business integration.

    The Challenges of Enterprise Mobility

    The real challenge of enterprise mobile apps is not creating the front-end environment or developing the front-end for multiple platforms. The most important part of an enterprise mobile app is exposing your enterprise data to mobile devices, and the real pain is that your business data might reside in lots of different systems, including legacy systems, ERP systems, etc., and these systems will be deployed with lots of security restrictions. Exposing data from on-premises servers is not an easy thing for most business organizations. Many organizations spend too much time on their front-end development strategy, but really lack a back-end strategy for exposing their business data to mobile apps. So building a REST services layer and mobile back-end services on top of legacy systems and existing middleware systems is the key part of most enterprise mobile apps, where multiple mobile platforms can easily consume these REST services and other mobile back-end services.

    For some mobile apps, we can't predict the user base, especially for products where customers can grow gradually at any time. And for today's mobile apps, faster time-to-market is critical, so spending too much time on a mobile app's scalability up front will not be worth it. The real power of the Cloud is agility and on-demand scalability, where we can scale our applications up and down very easily; it would be great if we could apply that power to mobile apps. So using the Cloud for mobile apps is a natural fit, where we can use the Cloud as the storage for mobile apps and the hosting mechanism for mobile back-end services, with a great level of on-demand scalability and operational agility. Cloud-based Mobile Backend as a Service is therefore a great choice for building enterprise mobile apps, where enterprises can enjoy the massive scalability provided by public cloud vendors such as Microsoft Windows Azure.

    Mobile Backend as a Service (MBaaS)

    We have discussed the key challenges of enterprise mobile apps and how we can leverage the Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and the HTML5 platform, which can be used as a backend for your mobile apps with the scalability power of the Cloud.
    The information below provides the key features of a typical MBaaS platform:

        Cloud-based storage for your application data.
        Automatic REST API services over the application data, for CRUD operations.
        Native push notification services with massive scalability.
        User management services for authenticating users.
        User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter.
        Scheduler services for periodically sending data to mobile devices.
        Native SDKs for multiple mobile platforms such as Windows Phone and Windows Store, Android, Apple iOS, and HTML5, for easily and securely accessing the mobile services from mobile apps.

    Typically, an MBaaS platform will provide native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS-based REST APIs can be used for integrating with enterprise backend systems, and the same mobile services can be consumed from multiple platforms, so application logic can be reused across mobile platforms. Public cloud vendors are building mobile services on top of their PaaS offerings: Windows Azure Mobile Services is a great MBaaS offering that leverages the Windows Azure cloud platform's PaaS capabilities. The hybrid mobile development platform Titanium provides its own MBaaS services, and LoopBack is a new MBaaS service provided by the Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also on on-premises servers.

    The Challenges of MBaaS Solutions

    If you are building your mobile apps against new data storage, it will be very easy, since there are no integration challenges to face. But in most use cases you have to extract application data stored on on-premises servers, which might be behind VPNs and firewalls, so exposing this data to your MBaaS solution with proper security is a big challenge. The capability of your MBaaS vendor is very important, as you have to interact with your legacy systems for many enterprise mobile apps, so you should be very careful when choosing an MBaaS vendor. At the same time, you should have a proper strategy for mobilizing application data stored in on-premises legacy systems, where your solution architecture and strategy matter more than platforms and tools.

    Windows Azure Mobile Services

    Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices, so it can be used as a cloud backend for your mobile apps, providing global availability and reach. Windows Azure Mobile Services provides storage services, user management with social network integration, push notification services, and scheduler services, and provides native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js, including the use of NPM modules, in your server-side scripts.

    In the previous section we discussed some challenges of MBaaS solutions. You can leverage the Windows Azure cloud platform to solve many of these enterprise mobility challenges: the entire Windows Azure platform can play a key role as the backend for your mobile apps.
    With Windows Azure, you can easily connect to your on-premises systems, which is a key thing for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes Windows Azure the de facto platform for enterprise mobility for enterprises who have been leveraging the Microsoft ecosystem for their application and IT infrastructure.

    Windows Azure Mobile Services is going through its next evolution, and you can expect some exciting features in the near future. One area where Windows Azure Mobile Services definitely needs improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose among multiple default storage options when creating a new mobile service instance; say, different storage providers such as a SQL Server storage provider and a Table storage provider, with developers choosing their preferred provider when creating a new mobile services project. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where they performed very well.

    MBaaS Over MEAP

    Recently, many larger enterprises have adopted mobile enterprise application platforms (MEAP) for their mobile apps. I haven't worked on any production MEAP solution, but I hear that developers are really struggling with MEAP in various ways. The learning curve for a proprietary MEAP platform is very high, and I am completely against using a large proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front-end, with a cloud-hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, with REST APIs used for integrating with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems, and these REST APIs can be hosted in the Cloud, where we can enjoy the power of the Cloud for our services. If you have REST APIs for your enterprise data, then you can easily build mobile front-ends for multiple platforms.

    You can follow me on Twitter @shijucv

  • Create and use a Button class in AS3.0

    - by Madcowe
    I am currently working on a game, and it is all going well. On the shop screen there are several buttons that affect the player's stats for when the player restarts the game. The buttons' names (with text on the left), however, are rather cryptic, and it's hard to figure out what they do unless you test them. So the solution I came up with is to create an InfoBox with an InfoText inside, so that when the cursor is over a button it appears with the description, cost, etc.

    I managed to do this too; however, the way I was about to do it would mean I had to create 3 event listeners per button (CLICK, ROLL_OVER, ROLL_OUT) and, obviously, 3 functions connected to each event listener. Now, I don't mind having 1 event listener per button for the click, but since the other events just make a box appear and disappear and display some text, I thought it was way too much of a mess of code.

    What I tried to do: I created a new class called InfoBoxButton, with the following code:

        package {
            import flash.display.SimpleButton;
            import flash.display.MovieClip;
            import flash.ui.Mouse;
            import flash.events.MouseEvent;

            public class InfoBoxButton extends SimpleButton {
                public var description:String;
                public var infoBox:InfoBox;

                public function InfoBoxButton(description) {
                    this.addEventListener(MouseEvent.ROLL_OVER, displayInfoText, false, 0, true);
                    this.addEventListener(MouseEvent.ROLL_OUT, hideInfoText, false, 0, true);
                }

                private function displayInfoText() {
                    infoBox.infoText.text = description;
                    infoBox.visible = true;
                }

                private function hideInfoText() {
                    infoBox.infoText.text = "";
                    infoBox.visible = false;
                }
            }
        }

    But now I don't have an idea how to associate it with the button. I have tried this:

        public var SoonButton:InfoBoxButton = new InfoBoxButton("This is merely a test");

    SoonButton is a button I made on the shop screen (SoonButton is its instance name), but I can't think of a way of associating one button with the other... I have been fiddling with the code for like 3 hours yesterday and no luck... Can anyone give me some pointers on how I should go about doing it?
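    A hedged sketch of one way to wire this up (my suggestion, not from the original post): rather than subclassing SimpleButton (an existing stage instance cannot be re-typed after the fact), attach the two listeners to any button through a small helper. This assumes an infoBox instance with an infoText field on the shop screen, as in the class above:

        // Hypothetical helper: give any existing button the info-box behaviour.
        function registerInfoButton(btn:SimpleButton, description:String):void {
            btn.addEventListener(MouseEvent.ROLL_OVER, function(e:MouseEvent):void {
                infoBox.infoText.text = description;
                infoBox.visible = true;
            });
            btn.addEventListener(MouseEvent.ROLL_OUT, function(e:MouseEvent):void {
                infoBox.infoText.text = "";
                infoBox.visible = false;
            });
        }

        registerInfoButton(SoonButton, "This is merely a test");

    Alternatively, the library symbol's class can be set to InfoBoxButton in the Flash IDE, so that stage instances are constructed as that type in the first place.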

    Read the article

  • LINQ Query using Multiple From and Multiple Collections

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace ConsoleApplication2
    {
        class Program
        {
            static void Main(string[] args)
            {
                var emps = GetEmployees();
                var deps = GetDepartments();

                // Two "from" clauses produce a cross join of the two collections,
                // which the "where" clause then filters.
                var results = from e in emps
                              from d in deps
                              where e.EmpNo >= 1 && d.DeptNo <= 30
                              select new { Emp = e, Dept = d };

                foreach (var item in results)
                {
                    Console.WriteLine("{0},{1},{2},{3}", item.Dept.DeptNo, item.Dept.DName, item.Emp.EmpNo, item.Emp.EmpName);
                }
            }

            private static List<Emp> GetEmployees()
            {
                return new List<Emp>() {
                    new Emp() { EmpNo = 1, EmpName = "Smith", DeptNo = 10 },
                    new Emp() { EmpNo = 2, EmpName = "Narayan", DeptNo = 20 },
                    new Emp() { EmpNo = 3, EmpName = "Rishi", DeptNo = 30 },
                    new Emp() { EmpNo = 4, EmpName = "Guru", DeptNo = 10 },
                    new Emp() { EmpNo = 5, EmpName = "Priya", DeptNo = 20 },
                    new Emp() { EmpNo = 6, EmpName = "Riya", DeptNo = 10 }
                };
            }

            private static List<Department> GetDepartments()
            {
                return new List<Department>() {
                    new Department() { DeptNo = 10, DName = "Accounts" },
                    new Department() { DeptNo = 20, DName = "Finance" },
                    new Department() { DeptNo = 30, DName = "Travel" }
                };
            }
        }

        class Emp
        {
            public int EmpNo { get; set; }
            public string EmpName { get; set; }
            public int DeptNo { get; set; }
        }

        class Department
        {
            public int DeptNo { get; set; }
            public String DName { get; set; }
        }
    }
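    A note on the design choice (this comparison is my addition, not part of the original sample): the two from clauses above form a cross join, pairing every employee with every department that passes the where filter. If the intent were instead to pair each employee only with his or her own department, C#'s join clause expresses that directly. Reusing the emps and deps lists from the sample above:

        var joined = from e in emps
                     join d in deps on e.DeptNo equals d.DeptNo
                     select new { Emp = e, Dept = d };

        foreach (var item in joined)
        {
            // prints each employee once, alongside the matching department
            Console.WriteLine("{0},{1},{2}", item.Emp.EmpName, item.Dept.DeptNo, item.Dept.DName);
        }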

    Read the article

  • box2D simulation doesn't work

    - by shadow_of__soul
    It has been a while since the last time I used Box2D, and when I needed to make some stuff I saw that my simulation didn't work (it compiles, but does nothing). I haven't even been able to get the examples working, or this simple example I'm pasting below:

        package
        {
            import flash.display.Sprite;
            import flash.events.Event;
            import flash.events.TimerEvent;
            import flash.utils.Timer;

            import Box2D.Common.Math.b2Vec2;
            import Box2D.Dynamics.b2World;
            import Box2D.Dynamics.b2BodyDef;
            import Box2D.Dynamics.b2Body;
            import Box2D.Collision.Shapes.b2CircleShape;
            import Box2D.Dynamics.b2Fixture;
            import Box2D.Dynamics.b2FixtureDef;

            import org.flashdevelop.utils.FlashConnect;

            public class Main extends Sprite
            {
                public var world:b2World;
                public var wheelBody:b2Body;
                public var stepTimer:Timer;

                public function Main():void
                {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void
                {
                    removeEventListener(Event.ADDED_TO_STAGE, init);

                    var gravity:b2Vec2 = new b2Vec2(0, 10);
                    world = new b2World(gravity, true);

                    var wheelBodyDef:b2BodyDef = new b2BodyDef();
                    wheelBodyDef.type = b2Body.b2_dynamicBody;
                    wheelBody = world.CreateBody(wheelBodyDef);

                    var circleShape:b2CircleShape = new b2CircleShape(5);
                    var wheelFixtureDef:b2FixtureDef = new b2FixtureDef();
                    wheelFixtureDef.shape = circleShape;
                    var wheelFixture:b2Fixture = wheelBody.CreateFixture(wheelFixtureDef);

                    stepTimer = new Timer(0.025 * 1000);
                    stepTimer.addEventListener(TimerEvent.TIMER, onTick);
                    FlashConnect.trace(wheelBody.GetPosition().x, wheelBody.GetPosition().y);
                    stepTimer.start(); // entry point
                }

                private function onTick(a_event:TimerEvent):void
                {
                    world.Step(0.025, 10, 10);
                    FlashConnect.trace(wheelBody.GetPosition().x, wheelBody.GetPosition().y);
                }
            }
        }

    In this example the object should fall down, but the positions reported by the trace method are always 0. So it is not a display problem where everything just looks frozen; the simulation itself is not running, and I have no idea why :( Can anyone point me in the right direction for where to look for the problem? My setup: Windows 7, FlashDevelop 4.2.1, SDK 4.6.0, compiling for Flash 10 (but I tried every target I have available, up to Flash 11.5), project set at 30fps.

    Read the article

  • Earthquake Locator - Live Demo and Source Code

    - by Bobby Diaz
    Quick Links: Live Demo, Source Code. I finally got a live demo up and running!  I signed up for a shared hosting account over at discountasp.net so I could post a working version of the Earthquake Locator application, but ran into a few minor issues related to RIA Services. Thankfully, Tim Heuer had already encountered and explained all of the problems I had, along with solutions to these and other common pitfalls. You can find his blog post here. The ones that got me were the default authentication tag being set to Windows instead of Forms, the need to add the <baseAddressPrefixFilters> tag since I was running on a shared server using host headers, and finally the Multiple Authentication Schemes settings in the IIS7 Manager. To get the demo application ready, I pulled down local copies of the earthquake data feeds that the application can use instead of pulling from the USGS web site. I basically added the feed URL as an app setting in the web.config:

        <appSettings>
            <!-- USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/ -->
            <!--<add key="FeedUrl"
                 value="http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml" />-->
            <!--<add key="FeedUrl"
                 value="http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml" />-->
            <!--<add key="FeedUrl"
                 value="~/Demo/1day-M2.5.xml" />-->
            <add key="FeedUrl"
                 value="~/Demo/7day-M2.5.xml" />
        </appSettings>

    You will need to do the same if you want to run from local copies of the feed data. I also made the following minor changes to the EarthquakeService class so that it gets the FeedUrl from the web.config:

        private static readonly string FeedUrl = ConfigurationManager.AppSettings["FeedUrl"];

        /// <summary>
        /// Gets the feed at the specified URL.
        /// </summary>
        /// <param name="url">The URL.</param>
        /// <returns>A <see cref="SyndicationFeed"/> object.</returns>
        public static SyndicationFeed GetFeed(String url)
        {
            SyndicationFeed feed = null;

            if ( !String.IsNullOrEmpty(url) && url.StartsWith("~") )
            {
                // resolve virtual path to physical file system
                url = System.Web.HttpContext.Current.Server.MapPath(url);
            }

            try
            {
                log.Debug("Loading RSS feed: " + url);

                using ( var reader = XmlReader.Create(url) )
                {
                    feed = SyndicationFeed.Load(reader);
                }
            }
            catch ( Exception ex )
            {
                log.Error("Error occurred while loading RSS feed: " + url, ex);
            }

            return feed;
        }

    You can now view the live demo or download the source code here, but be sure you have WCF RIA Services installed before running the application locally, and make sure the FeedUrl is pointing to a valid location. Please let me know if you have any comments or if you run into any issues with the code. Enjoy!
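    For anyone wiring this up, here is a quick usage sketch (my addition; it assumes the EarthquakeService.GetFeed method and the FeedUrl app setting shown above) that reads the configured feed and prints each earthquake entry:

        using System;
        using System.Configuration;
        using System.ServiceModel.Syndication;

        class FeedDemo
        {
            static void Main()
            {
                // Read the same app setting the service uses, then dump the feed items.
                SyndicationFeed feed = EarthquakeService.GetFeed(
                    ConfigurationManager.AppSettings["FeedUrl"]);

                if (feed != null)
                {
                    foreach (SyndicationItem item in feed.Items)
                    {
                        Console.WriteLine("{0:u}  {1}", item.PublishDate, item.Title.Text);
                    }
                }
            }
        }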

    Read the article

  • SQLAuthority News SQL Server 2008 R2 Update for Developers Training Kit (March 2010 Update)

    SQL Server 2008 R2 offers an impressive array of capabilities for developers that build upon key innovations introduced in SQL Server 2008. The SQL Server 2008 R2 Update for Developers Training Kit is ideal for developers who want to understand how to take advantage of the key improvements introduced in SQL [...]

    Read the article

  • OWB 11gR2 &ndash; Degenerate Dimensions

    - by David Allan
    Ever wondered how to build degenerate dimensions in OWB and still get the benefits of slowly changing dimensions and cube loading? It's now possible through some changes in 11gR2 that make dimension and cube loading much more flexible. This lets you keep the benefits of OWB's surrogate key handling and slowly changing dimension references when loading a fact table that also needs degenerate dimensions (see Ralph Kimball's degenerate dimensions design tip). Here we will see how to use the cube operator to load slowly changing, regular, and degenerate dimensions. The cube and cube operator can now work with dimensions which have no surrogate key as well as dimensions with surrogates, so you can get the benefit of the cube loading and incorporate the degenerate dimension loading. What you need to do is create a dimension in OWB that is used purely for ETL metadata: the dimension itself is never deployed (its table is, but it holds no data); it has no surrogate key; and it has a single level with a business attribute (the degenerate dimension data) and a dummy attribute, say description, just to pass OWB validation. When this degenerate dimension is added into a cube, you will need to configure the fact table created and set the 'Deployable' flag to FALSE for the foreign key generated to the degenerate dimension table. The degenerate dimension reference will then be in the cube operator and used when matching. Create the degenerate dimension using the regular wizard. Delete the Surrogate ID attribute; this is not needed. Define a level name for the dimension member (any name). After the wizard has completed, in the editor delete the hierarchy STANDARD that was automatically generated; there is only a single level, so no hierarchy is needed and it shouldn't really be created. Deploy the implementing table DD_ORDERNUMBER_TAB; it needs to be deployed but with no data (the mapping here will do a left outer join of the source data with the empty degenerate dimension table). Now, go ahead and build your cube: use the regular TIMES dimension, for example, along with your degenerate dimension DD_ORDERNUMBER, and you can add in SCD dimensions etc. Configure the fact table created and set Deployable to false, so the foreign key does not get generated. You can now use the cube in a mapping and load data into the fact table via the cube operator, which will look after surrogate key lookups and slowly changing dimension references. If you generate the SQL you will see that the ON clause for matching includes the columns representing the degenerate dimension. Here we have seen how this use case of loading fact tables using degenerate dimensions becomes a whole lot simpler in OWB 11gR2. I'm sure there are other use cases where this mix of dimensions with surrogate and regular identifiers is useful; fact tables partitioned by date columns are another classic example that this will greatly help, making the cube operator much more useful. Good to hear any comments.

    Read the article

  • Understanding Oracle: Demystifying OpenWorld

    - by mseika
    Seminar: Wednesday 24th October 2012: Avnet, Bracknell. Oracle OpenWorld is the world's largest event dedicated to helping enterprises harness the power of technology, over a full week in October. Oracle Corporation always uses Oracle OpenWorld to make its most important product announcements, and this year is no exception. We realise that not all our partners can attend this prestigious event in San Francisco, primarily due to time and cost pressures. Oracle OpenWorld is the only conference that goes this deep and wide with Oracle technology, providing thousands of sessions and hundreds of demonstrations geared toward helping partners and customers get better results with the technology they have, and plan strategically for the technology they will need to keep ahead of the competition in the years to come. With the sheer number of announcements planned, it is sometimes difficult to find your way through the fog and identify the opportunities most relevant to your business for the coming year. So why not engage with Oracle's UK team via Avnet and have the announcements shared with you face-to-face, in the UK? As a key Value Added Distributor of Oracle Applications, Technology and Hardware solutions, Avnet has been attending Oracle OpenWorld for a number of years and invites our partners to attend a half-day summary event which will share the keynote announcements. We will also help you prioritise the announcements of greatest interest and business opportunity for the UK channel.

    Agenda*
    12:00-13:15  Registration and lunch
    13:15-14:00  Introductions and key Hardware announcements. Discover how Oracle's complete and integrated application-aware virtualization solutions, including virtualization for SPARC and x86 architectures, can help you gain better efficiencies across your business. Get updates on how Oracle storage products and solutions can accelerate database performance, improve application responsiveness, and meet your data protection needs.
    14:00-14:15  Q&A and break
    14:15-15:00  Key Technology announcements. Technology products, encompassing Oracle's Database 12c and Middleware, are revolutionizing the industry with record-breaking performance, helping customers consolidate onto private clouds and achieve high returns on investment.
    15:00-15:15  Q&A and break
    15:15-16:00  Key Applications announcements. Presentations focused on Oracle's strategy and vision for its applications business, including Oracle E-Business Suite; Oracle's PeopleSoft, JD Edwards, Siebel, Hyperion, and Agile products; and the newly available Oracle Fusion Applications.
    16:00-16:30  Oracle-on-Oracle announcements and business opportunities with Avnet. Learn about Oracle's cloud computing and Oracle-on-Oracle strategies and find out more about Oracle's engineered systems for the broad market.
    16:30  Close
    * Please note the agenda may be subject to change.

    What you need to do now: register now, or for more information email our Oracle events team at [email protected]. N.B. Places are limited, so please register early to avoid disappointment.

    Read the article

  • How to install Windows 8 to dual boot with Windows 7/XP?

    - by Gopinath
    Microsoft released the Windows 8 beta (Consumer Preview) a few days ago, and yesterday I had a chance to install it on one of my home computers. My home PC runs Windows 7, and I wanted to install Windows 8 side by side so that I can dual boot. The installation process was pretty simple, and within 40 minutes my PC was up and running with the beautiful Windows 8 OS alongside Windows 7. In this post I want to share my experience and provide the information you need to install Windows 8.

    1. Identify a drive with at least 20 GB of space - Identify one of the drives on your hard disk that can be used to install Windows 8. Delete all the files, or preferably quick format it, and make sure that it has at least 20 GB of free space. Rename the drive to Windows 8 so that it will be easy to identify the destination drive during the installation process.

    2. Download the Windows 8 installer ISO - Go to Microsoft's website and download the Windows 8 ISO, which is an approximately 2.5 GB file (32-bit English version).

    3. Create a bootable Windows 8 USB/DVD - It's advised to launch the Windows 8 installer from a bootable USB or DVD to enable dual boot, instead of unzipping the ISO file and launching the setup from within Windows 7. Also consider creating a bootable USB instead of a bootable DVD to save a disc. To create the bootable USB/DVD, download and install the Windows 7 DVD / USB tool available at microsoftstore.com, then launch the utility and follow the onscreen instructions, where you will be asked to choose the ISO file (point to the file downloaded in step 2) and to choose a USB drive or DVD as the destination. The onscreen instructions are very simple, and you should be able to complete them in about 20 minutes. You now have the Windows 8 installation setup on your USB drive or DVD.

    4. Change BIOS settings to boot from USB/DVD - Restart your PC and open the BIOS configuration settings by pressing the F2, F12, or DELETE key (the key depends on your computer manufacturer). Go to the boot sequence options and make sure that USB/DVD is ahead of the hard disk in the boot sequence. Save the settings and restart the PC.

    5. Install Windows 8 - After the restart you should be taken straight into the Windows 8 installation screen. Follow the onscreen instructions and install Windows 8 on the drive identified in step 1. When prompted for the product serial key, enter NF32V-Q9P3W-7DR7Y-JGWRW-JFCK8. The installer will restart a couple of times during the installation process. On the first restart, make sure that you remove the USB/DVD. The Windows 8 installation process is pretty simple and very quick; the complete process of creating the bootable USB and installing should take 30-40 minutes.

    Read the article

  • Stairway to SQL Server Indexes: Step 1, Introduction to Indexes

    Indexes are the database objects that enable SQL Server to satisfy each data access request from a client application with the minimum amount of effort, resulting in the maximum performance of individual requests while also reducing the impact of one request upon another. Prerequisites: familiarity with the following relational database concepts: table, row, primary key, foreign key.

    Read the article

  • Problem to match font size to the screen resolution in libgdx

    - by Iñaki Bedoya
    I'm having problems showing text in my game at the same size on different screens, so I did a simple test. The test consists of showing a text that fits the screen; I want the text to be the same size independently of the screen and of DPI. I've found this and this answer that I think should solve my problem, but they don't. On the desktop the size is OK, but on my phone it is too big. This is the result on my Nexus 4: (768x1280, 2.0 density) And this is the result on my MacBook: (480x800, 0.6875 density) I'm using Open Sans Condensed (link to Google Fonts). As you can see, it looks good on the desktop but too big on the phone. Here is the code of my test:

        public class TextTest extends ApplicationAdapter {
            private static final String TAG = TextTest.class.getName();
            private static final String TEXT = "Tap the screen to start";

            private OrthographicCamera camera;
            private Viewport viewport;
            private SpriteBatch batch;
            private BitmapFont font;

            @Override
            public void create () {
                Gdx.app.log(TAG, "Screen size: " + Gdx.graphics.getWidth() + "x" + Gdx.graphics.getHeight());
                Gdx.app.log(TAG, "Density: " + Gdx.graphics.getDensity());

                camera = new OrthographicCamera();
                viewport = new ExtendViewport(Gdx.graphics.getWidth(), Gdx.graphics.getWidth(), camera);
                batch = new SpriteBatch();

                FreeTypeFontGenerator generator = new FreeTypeFontGenerator(Gdx.files.internal("fonts/OpenSans-CondLight.ttf"));
                font = createFont(generator, 64);
                generator.dispose();
            }

            private BitmapFont createFont(FreeTypeFontGenerator generator, float dp) {
                FreeTypeFontGenerator.FreeTypeFontParameter parameter = new FreeTypeFontGenerator.FreeTypeFontParameter();
                int fontSize = (int) (dp * Gdx.graphics.getDensity());
                parameter.size = fontSize;
                Gdx.app.log(TAG, "Font size: " + fontSize + "px");
                return generator.generateFont(parameter);
            }

            @Override
            public void render () {
                Gdx.gl.glClearColor(1, 1, 1, 1);
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

                int w = -(int) (font.getBounds(TEXT).width / 2);
                batch.setProjectionMatrix(camera.combined);
                batch.begin();
                font.setColor(Color.BLACK);
                font.draw(batch, TEXT, w, 0);
                batch.end();
            }

            @Override
            public void resize(int width, int height) {
                viewport.update(width, height);
            }

            @Override
            public void dispose() {
                font.dispose();
                batch.dispose();
            }
        }

    I'm trying to find a neat way to fix this. What am I doing wrong? Is it the camera? The viewport? UPDATE: What I want is to keep the same margins in proportion, independently of the screen size or resolution. This image illustrates what I mean.

    Read the article

  • Invitation: WebCenter Implementation Specialist Exam Preparation Webcasts

    - by rituchhibber
    Oracle Partner Network would like to invite you to Refresh Courses for WebCenter Content and WebCenter Portal, to help partners prepare for the WebCenter Implementation Specialist exams. Each is a 3-hour intensive, partner-only refresher training session, providing attendees with an overview of WebCenter Content and WebCenter Portal functions and related topics. After the refresher part you will be able to take the relevant Implementation Specialist exam, depending on your personal focus. NOTE: This is only suitable for experienced WebCenter Content or WebCenter Portal practitioners. Who should attend? Partner consultants who want to become an Oracle WebCenter Content and/or WebCenter Portal Certified Implementation Specialist, which will help them differentiate themselves in front of customers and support their companies in becoming Specialized.

    Webcast Details:
    December 14th - WebCenter Content Refresh Course - Markus Neubauer, Silbury (WebCenter Content Specialized Partner) - Join Webcast - Dial-in numbers: CC/SP: 1579222/9221 - Time: 12:00-15:00 CET (break around 13:30) - Conference ID/Key: 9249533/1412
    January 10th - WebCenter Portal Refresh Course - Yannick Ongena, InfoMentum (WebCenter Portal Specialized Partner) - Join Webcast - Dial-in numbers: CC/SP: 1579222/9221 - Time: 12:00-15:00 CET (break around 13:30) - Conference ID/Key: 9249375/1001
    February 22nd - WebCenter Content Refresh Course - Markus Neubauer, Silbury (WebCenter Content Specialized Partner) - Join Webcast - Dial-in numbers: CC/SP: 1579222/9221 - Time: 12:00-15:00 CET (break around 13:30) - Conference ID/Key: 9249541/2202
    March 13th - WebCenter Portal Refresh Course - Yannick Ongena, InfoMentum (WebCenter Portal Specialized Partner) - Join Webcast - Dial-in numbers: CC/SP: 1579222/9221 - Time: 12:00-15:00 CET (break around 13:30) - Conference ID/Key: 9249549/1303
    Local dial-in numbers can be found here.

    Next steps: after the webcast you will receive the training material and FREE vouchers to book and take the Oracle ECM 11g Certified Implementation Specialist exam and/or the Oracle WebCenter 11g Essentials exam. Booking with a voucher can be done on www.pearsonvue.com. Note: FREE vouchers will be sent after attending the webcast.

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there are any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are especially vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. The goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I'm a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them.

    Cross-Site Scripting (XSS) Attacks

    According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That's bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx:

        <%@Page Language="C#" %>
        <html>
        <head>
            <title>Some Page</title>
        </head>
        <body>
            Welcome <%= Request["username"] %>
        </body>
        </html>

    Nothing fancy here. Notice that the page displays the current username by using Request["username"]. Using Request["username"] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request["username"] to redisplay untrusted information, you have now opened your website to XSS attacks. Here's how. Imagine that an evil hacker creates the following link on another website (hackers.com):

        <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a>

    Notice that the link includes a query string variable named username, and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com, and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding.

    Protecting Coming In (Request Validation)

    In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit "potentially dangerous" content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable, then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action.
    For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request:

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title></title>
        </head>
        <body>
            <form data-bind="submit:submit">
                <div>
                    <label>
                        User Name:
                        <input data-bind="value:user.userName" />
                    </label>
                </div>
                <div>
                    <label>
                        Email:
                        <input data-bind="value:user.email" />
                    </label>
                </div>
                <div>
                    <input type="submit" value="Submit" />
                </div>
            </form>
            <script src="Scripts/jquery-1.7.1.js"></script>
            <script src="Scripts/knockout-2.1.0.js"></script>
            <script>
                var viewModel = {
                    user: {
                        userName: ko.observable(),
                        email: ko.observable()
                    },
                    submit: function () {
                        $.post("/api/users", ko.toJS(this.user));
                    }
                };
                ko.applyBindings(viewModel);
            </script>
        </body>
        </html>

    The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here's the server-side ASP.NET Web API controller and model class:

        public class UsersController : ApiController
        {
            public HttpResponseMessage Post(UserViewModel user)
            {
                var userName = user.UserName;
                return Request.CreateResponse(HttpStatusCode.OK);
            }
        }

        public class UserViewModel
        {
            public string UserName { get; set; }
            public string Email { get; set; }
        }

    If you submit the HTML form, you don't get an error. The "potentially dangerous" content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value "<script>alert('boo')</script>". So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content, because you don't have the Request Validation safety net which you have in a traditional server-side ASP.NET app.

    Protecting Going Out (Automatic HTML Encoding)

    Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text "<b>Hello!</b>" instead of the text "Hello!" in bold:

        @{
            var message = "<b>Hello!</b>";
        }
        @message

    If you don't want to render content as HTML encoded in razor, then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding:

        <%@ Page Language="C#" %>
        <% var message = "<b>Hello!</b>"; %>
        <%: message %>

    This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered, which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value "javascript:alert('evil')" to the Hyperlink control's NavigateUrl property and execute the JavaScript.) The situation with Knockout is more complicated. If you use the Knockout TEXT binding, then you get HTML encoded content.
    On the other hand, if you use the HTML binding, then you do not:

        <!-- This JavaScript DOES NOT execute -->
        <div data-bind="text:someProp"></div>

        <!-- This JavaScript DOES execute -->
        <div data-bind="html:someProp"></div>

        <script src="Scripts/jquery-1.7.1.js"></script>
        <script src="Scripts/knockout-2.1.0.js"></script>
        <script>
            var viewModel = {
                someProp: "<script>alert('Evil!')<" + "/script>"
            };
            ko.applyBindings(viewModel);
        </script>

    So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: "Since this binding sets your text value using a text node, it's safe to set any string value without risking HTML or script injection." Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this:

        <a data-bind="attr:{href:homePageUrl}">Go</a>

        <script src="Scripts/jquery-1.7.1.min.js"></script>
        <script src="Scripts/knockout-2.1.0.js"></script>
        <script>
            var viewModel = {
                homePageUrl: "javascript:alert('evil!')"
            };
            ko.applyBindings(viewModel);
        </script>

    In the page above, the value "javascript:alert('evil')" is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes.

    Cross-Site Request Forgery (CSRF) Attacks

    Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and log in to MajorBank.com and then navigate to Hackers.com, you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website.
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name=”__RequestVerificationToken” type=”hidden” value=”NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1″ /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker trick you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ). 
    For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page:

        @{
            Layout = null;
        }
        <!DOCTYPE html>
        <html>
        <head>
            <title>Index</title>
        </head>
        <body>
            @Html.AntiForgeryToken()
            <div id="applicationHost">
                Loading app....
            </div>
            @Scripts.Render("~/scripts/vendor")
            <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script>
        </body>
        </html>

    Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header:

        var csrfToken = $("input[name='__RequestVerificationToken']").val();

        $.ajax({
            headers: { __RequestVerificationToken: csrfToken },
            type: "POST",
            dataType: "json",
            contentType: 'application/json; charset=utf-8',
            url: "/api/products",
            data: JSON.stringify({ name: "Milk", price: 2.33 }),
            statusCode: {
                200: function () {
                    alert("Success!");
                }
            }
        });

    Use the following code to create an action filter which you can use to match the header and cookie tokens:

        using System.Linq;
        using System.Net.Http;
        using System.Web.Helpers;
        using System.Web.Http.Controllers;

        namespace MvcApplication2.Infrastructure
        {
            public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute
            {
                protected override bool IsAuthorized(HttpActionContext actionContext)
                {
                    var headerToken = actionContext
                        .Request
                        .Headers
                        .GetValues("__RequestVerificationToken")
                        .FirstOrDefault();

                    var cookieToken = actionContext
                        .Request
                        .Headers
                        .GetCookies()
                        .Select(c => c[AntiForgeryConfig.CookieName])
                        .FirstOrDefault();

                    // check for missing cookie or header
                    if (cookieToken == null || headerToken == null)
                    {
                        return false;
                    }

                    // ensure that the cookie matches the header
                    try
                    {
                        AntiForgery.Validate(cookieToken.Value, headerToken);
                    }
                    catch
                    {
                        return false;
                    }

                    return base.IsAuthorized(actionContext);
                }
            }
        }

    Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated, and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this:

        [ValidateAjaxAntiForgeryToken]
        public HttpResponseMessage PostProduct(Product productToCreate)
        {
            // add product to db
            return Request.CreateResponse(HttpStatusCode.OK);
        }

    After you complete these steps, it won't be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won't travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don't like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method – there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor.

    Conclusion

    The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app.
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.

    Read the article

< Previous Page | 384 385 386 387 388 389 390 391 392 393 394 395  | Next Page >