Search Results

Search found 2000 results on 80 pages for 'robert conn'.

Page 8/80

  • Java: Making concurrent MySQL queries from multiple clients synchronised

    - by Misha Gale
    I work at a gaming cybercafe, and we've got a system here (smartlaunch) which keeps track of game licenses. I've written a program which interfaces with this system (actually, with it's backend MySQL database). The program is meant to be run on a client PC and (1) query the database to select an unused license from the pool available, then (2) mark this license as in use by the client PC. The problem is, I've got a concurrency bug. The program is meant to be launched simultaneously on multiple machines, and when this happens, some machines often try and acquire the same license. I think that this is because steps (1) and (2) are not synchronised, i.e. one program determines that license #5 is available and selects it, but before it can mark #5 as in use another copy of the program on another PC tries to grab that same license. I've tried to solve this problem by using transactions and table locking, but it doesn't seem to make any difference - Am I doing this right? Here follows the code in question: public LicenseKey Acquire() throws SmartLaunchException, SQLException { Connection conn = SmartLaunchDB.getConnection(); int PCID = SmartLaunchDB.getCurrentPCID(); conn.createStatement().execute("LOCK TABLE `licensekeys` WRITE"); String sql = "SELECT * FROM `licensekeys` WHERE `InUseByPC` = 0 AND LicenseSetupID = ? ORDER BY `ID` DESC LIMIT 1"; PreparedStatement statement = conn.prepareStatement(sql); statement.setInt(1, this.id); ResultSet results = statement.executeQuery(); if (results.next()) { int licenseID = results.getInt("ID"); sql = "UPDATE `licensekeys` SET `InUseByPC` = ? WHERE `ID` = ?"; statement = conn.prepareStatement(sql); statement.setInt(1, PCID); statement.setInt(2, licenseID); statement.executeUpdate(); statement.close(); conn.commit(); conn.createStatement().execute("UNLOCK TABLES"); return new LicenseKey(results.getInt("ID"), this, results.getString("LicenseKey"), results.getInt("LicenseKeyType")); } else { throw new SmartLaunchException("All licenses of type " + this.name + "are in use"); } }
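
    One common way to close the gap between steps (1) and (2) is to make the select-and-mark a single atomic unit, for example SELECT ... FOR UPDATE inside one transaction so the chosen row stays locked until the claim is committed. Below is a rough sketch of that pattern, written in Python rather than Java purely to illustrate the SQL; the table and column names come from the question, the connection details and the license_set_id/pc_id values are placeholders, and the licensekeys table must use InnoDB for row locks to apply.

        import MySQLdb

        # Placeholder connection details and ids for this sketch only.
        conn = MySQLdb.connect(host="localhost", user="user", passwd="passwd", db="smartlaunch")
        license_set_id, pc_id = 1, 42

        cur = conn.cursor()
        # Lock the first free license row until COMMIT so no other client can claim it.
        cur.execute(
            "SELECT ID, LicenseKey FROM licensekeys "
            "WHERE InUseByPC = 0 AND LicenseSetupID = %s "
            "ORDER BY ID DESC LIMIT 1 FOR UPDATE",
            (license_set_id,))
        row = cur.fetchone()
        if row is None:
            conn.rollback()
            raise RuntimeError("all licenses of this type are in use")
        license_id, license_key = row
        cur.execute("UPDATE licensekeys SET InUseByPC = %s WHERE ID = %s", (pc_id, license_id))
        conn.commit()  # releases the row lock; select and mark happen as one atomic step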

    Read the article

  • output parameter into label

    - by MyHeadHurts
    Instead of returning my output parameter value from my stored procedure to my label, it returns the default value I set the output parameter to. Why can't I put my output parameter into my text label? Dim reader As SqlDataReader cmd.Parameters.AddWithValue("@tour", "2365") cmd.Parameters.Add("@tourname", SqlDbType.VarChar) cmd.Parameters("@tourname").Direction = ParameterDirection.Output cmd.CommandText = "test" cmd.CommandType = CommandType.StoredProcedure cmd.Connection = conn conn.Open() reader = cmd.ExecuteReader() Dim myTable As DataTable = New DataTable() myTable.Load(reader) DropDownList1.DataSource = myTable DropDownList1.DataTextField = "ddate7" DropDownList1.DataBind() Label1.Text = cmd.Parameters("@tourname").ToString conn.Close()

    Read the article

  • How To perform a SQL Query to DataTable Operation That Can Be Cancelled

    - by David W
    I tried to make the title as specific as possible. Basically what I have running inside a backgroundworker thread now is some code that looks like: SqlConnection conn = new SqlConnection(connstring); SqlCommand cmd = new SqlCommand(query, conn); conn.Open(); SqlDataAdapter sda = new SqlDataAdapter(cmd); sda.Fill(Results); conn.Close(); sda.Dispose(); Where query is a string representing a large, time consuming query, and conn is the connection object. My problem now is I need a stop button. I've come to realize killing the backgroundworker would be worthless because I still want to keep what results are left over after the query is canceled. Plus it wouldn't be able to check the canceled state until after the query. What I've come up with so far: I've been trying to conceptualize how to handle this efficiently without taking too big of a performance hit. My idea was to use a SqlDataReader to read the data from the query piece at a time so that I had a "loop" to check a flag I could set from the GUI via a button. The problem is as far as I know I can't use the Load() method of a datatable and still be able to cancel the sqlcommand. If I'm wrong please let me know because that would make cancelling slightly easier. In light of what I discovered I came to the realization I may only be able to cancel the sqlcommand mid-query if I did something like the below (pseudo-code): while(reader.Read()) { //check flag status //if it is set to 'kill' fire off the kill thread //otherwise populate the datatable with what was read } However, it would seem to me this would be highly ineffective and possibly costly. Is this the only way to kill a sqlcommand in progress that absolutely needs to be in a datatable? Any help would be appreciated!

    Read the article

  • What's the best way to return different cells depending on position in a UITableView

    - by Robert Conn
    I have a grouped UITableView that has 3 sections and a number of cells in each section that each need to be a different custom cell, with different display requirements. I currently have a large if statement in cellForRowAtIndexPath, with each branch instantiating and returning the appropriate custom cell based on interrogating the indexPath's section and row properties. e.g. if (indexPath.section == 0 && indexPath.row == 0) { // instantiate and return cell A } else if (indexPath.section == 1 && indexPath.row == 2) { // instantiate and return cell B } //etc etc What is best practice for this scenario? A large if statement does the job, but is it the best implementation?

    Read the article

  • MySQL Queries using Doctrine & CodeIgniter

    - by 01010011
    Hi, How do I write plain SQL queries using the Doctrine connection object and display the results? For example, how do I perform: SELECT * FROM table_name WHERE column_name LIKE '%anything_similar_to_this%'; using Doctrine with something like this (this example does not work): $search_key = 'search_for_this'; $conn = Doctrine_Manager::connection(); $conn->execute('SELECT * FROM table_name WHERE column_name LIKE ?)', $search_key); echo $conn;
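
    Whatever layer ends up running the SQL, the usual pattern is the same: put the % wildcards inside the bound value rather than the SQL string, and iterate over the fetched rows instead of echoing the connection object. A minimal sketch of that pattern over a plain Python DB-API connection (MySQLdb here; the host, credentials, table and column names are placeholders mirroring the question):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user", passwd="passwd", db="db")
        cur = conn.cursor()
        search_key = 'search_for_this'
        # The wildcards travel inside the parameter, so the driver handles quoting/escaping.
        cur.execute("SELECT * FROM table_name WHERE column_name LIKE %s",
                    ('%' + search_key + '%',))
        for row in cur.fetchall():
            print(row)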

    Read the article

  • ob_start() -> ob_flush() doesn't work

    - by MB34
    I am using ob_start()/ob_flush() to, hopefully, give me some progress during a long import operation. Here is a simple outline of what I'm doing: <?php ob_start (); echo "Connecting to download Inventory file.<br>"; $conn = ftp_connect($ftp_site) or die("Could not connect"); echo "Logging into site download Inventory file.<br>"; ftp_login($conn,$ftp_username,$ftp_password) or die("Bad login credentials for ". $ftp_site); echo "Changing directory on download Inventory file.<br>"; ftp_chdir($conn,"INV") or die("could not change directory to INV"); // connection, local, remote, type, resume $localname = "INV"."_".date("m")."_".date('d').".csv"; echo "Downloading Inventory file to:".$localname."<br>"; ob_flush(); flush(); sleep(5); if (ftp_get($conn,$localname,"INV.csv",FTP_ASCII)) { echo "New Inventory File Downloaded<br>"; $datapath = $localname; ftp_close($conn); } else { ftp_close($conn); die("There was a problem downloading the Inventory file."); } ob_flush(); flush(); sleep(5); $csvfile = fopen($datapath, "r"); // open csv file $x = 1; // skip the header line $line = fgetcsv($csvfile); $y = (feof($csvfile) ? 2 : 5); while ((!$debug) ? (!feof($csvfile)) : $x <= $y) { $x++; $line = fgetcsv($csvfile); // do a lot of import stuff here with $line ob_flush(); flush(); sleep(1); } fclose($csvfile); // important: close the file ob_end_clean(); However, nothing is being output to the screen at all. I know the data file is getting downloaded because I watch the directory where it is being placed. I also know that the import is happening, meaning that it is in the while loop, because I can monitor the DB and records are being inserted. Any ideas as to why I am not getting output to the screen?

    Read the article

  • transaction handling in dataset based insert/update in c#

    - by user3703611
    I am trying to insert bulk records into a SQL Server database table using a DataSet, but I am unable to do transaction handling. Please help me apply transaction handling in the code below. I am using adapter.UpdateCommand.Transaction = trans; but this line gives me the error "Object reference not set to an instance of an object". Code: string ConnectionString = "server=localhost\\sqlexpress;database=WindowsApp;Integrated Security=SSPI;"; SqlConnection conn = new SqlConnection(ConnectionString); conn.Open(); SqlTransaction trans = conn.BeginTransaction(IsolationLevel.Serializable); SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Test ORDER BY Id", conn); SqlCommandBuilder builder = new SqlCommandBuilder(adapter); adapter.UpdateCommand.Transaction = trans; // Create a dataset object DataSet ds = new DataSet("TestSet"); adapter.Fill(ds, "Test"); // Create a data table object and add a new row DataTable TestTable = ds.Tables["Test"]; for (int i=1;i<=50;i++) { DataRow row = TestTable.NewRow(); row["Id"] = i; TestTable .Rows.Add(row); } // Update data adapter adapter.Update(ds, "Test"); trans.Commit(); conn.Close();

    Read the article

  • Iterating over a database column in Django

    - by curious
    I would like to iterate a calculation over a column of values in a MySQL database. I wondered if Django had any built-in functionality for doing this. Previously, I have just used the following to store each column as a list of tuples with the name table_column: import MySQLdb import sys try: conn = MySQLdb.connect (host = "localhost", user = "user", passwd="passwd", db="db") except MySQLdb.Error, e: print "Error %d: %s" % (e.args[0], e.args[1]) sys.exit (1) cursor = conn.cursor() for table in ['foo', 'bar']: for column in ['foobar1', 'foobar2']: cursor.execute('select %s from %s' % (column, table)) exec "%s_%s = cursor.fetchall()" % (table, column) cursor.close() conn.commit() conn.close() Is there any functionality built into Django to more conveniently iterate through the values of a column in a database table? I'm dealing with millions of rows so speed of execution is important.
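
    If those tables are already mapped to Django models, the ORM can hand back a single column directly with values_list(), and iterator() streams the rows instead of caching the whole queryset, which matters with millions of rows. A minimal sketch, assuming a hypothetical model Foo with a field foobar1 (both names stand in for whatever the real models define):

        from myapp.models import Foo  # hypothetical app and model

        # flat=True yields bare column values; iterator() avoids queryset caching.
        for value in Foo.objects.values_list('foobar1', flat=True).iterator():
            do_calculation(value)  # placeholder for the per-value calculation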

    Read the article

  • Fetch Subject & sender in IMAP

    - by shadyabhi
    After the following code:- import imaplib conn = imaplib.IMAP4("mail.daiict.ac.in") conn.login("200801076","mypass") # OUT: ('OK', ['Logged in.']) conn.select() # OUT: ('OK', ['166']) Now, how do I fetch the sender and subject of mails in the inbox?
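
    One way, continuing from the session above, is to SEARCH the selected mailbox for message numbers and then FETCH just the From and Subject headers of each message. A sketch in the same Python 2 style as the snippet (server, login and default mailbox are the ones from the question):

        import imaplib
        from email.parser import HeaderParser

        conn = imaplib.IMAP4("mail.daiict.ac.in")
        conn.login("200801076", "mypass")
        conn.select()  # INBOX by default
        typ, data = conn.search(None, "ALL")
        for num in data[0].split():
            # PEEK fetches the two headers without marking the message as read.
            typ, msg_data = conn.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)])")
            headers = HeaderParser().parsestr(msg_data[0][1])
            print("%s -- %s" % (headers["From"], headers["Subject"]))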

    Read the article

  • [C# n MySQL] Creating a database using Connector/NET Programming?

    - by yeeen
    How to create a database using connector/net programming? How come the following doesn't work? string connStr = "server=localhost;user=root;port=3306;password=mysql;"; MySqlConnection conn = new MySqlConnection(connStr); MySqlCommand cmd; string s0; try { conn.Open(); s0 = "CREATE DATABASE IF NOT EXISTS `hello`;"; cmd = new MySqlCommand(s0, conn); conn.Close(); } catch (Exception e) { Console.WriteLine(e.ToString()); }

    Read the article

  • Python client / server question

    - by AustinM
    I'm working on a bit of a project in python. I have a client and a server. The server listens for connections and once a connection is received it waits for input from the client. The idea is that the client can connect to the server and execute system commands such as ls and cat. This is my server code: import sys, os, socket host = '' port = 50105 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((host, port)) print("Server started on port: ", port) s.listen(5) print("Server listening\n") conn, addr = s.accept() print 'New connection from ', addr while (1): rc = conn.recv(5) pipe = os.popen(rc) rl = pipe.readlines() file = conn.makefile('w', 0) file.writelines(rl[:-1]) file.close() conn.close() And this is my client code: import sys, socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) host = 'localhost' port = input('Port: ') s.connect((host, port)) cmd = raw_input('$ ') s.send(cmd) file = s.makefile('r', 0) sys.stdout.writelines(file.readlines()) When I start the server I get the right output, saying the server is listening. But when I connect with my client and type a command the server exits with this error: Traceback (most recent call last): File "server.py", line 21, in <module> rc = conn.recv(2) File "/usr/lib/python2.6/socket.py", line 165, in _dummy raise error(EBADF, 'Bad file descriptor') socket.error: [Errno 9] Bad file descriptor On the client side, I get the output of ls but the server gets screwed up.
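
    The client's makefile('r', 0).readlines() only returns once it sees EOF, which happens when the server closes the connection; judging from the traceback, conn.close() runs inside the while loop, so on the next pass recv() is called on an already-closed socket and raises the bad file descriptor error. One minimal fix, sketched below, is to accept a fresh connection per command and close it after sending the output (this keeps the question's design, although running shell commands from untrusted clients is dangerous in general):

        import os, socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(('', 50105))          # same port as the question
        s.listen(5)
        print("Server listening")

        while True:
            conn, addr = s.accept()  # one connection per command
            rc = conn.recv(1024)     # read the whole command, not just 5 bytes
            if rc:
                pipe = os.popen(rc)
                conn.sendall(pipe.read())
                pipe.close()
            conn.close()             # the EOF this produces lets the client's readlines() return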

    Read the article

  • sqlobject: No connection has been defined for this thread or process

    - by Claudiu
    I'm using sqlobject in Python. I connect to the database with conn = connectionForURI(connStr) conn.makeConnection() This succeeds, and I can do queries on the connection: g_conn = conn.getConnection() cur = g_conn.cursor() cur.execute(query) res = cur.fetchall() This works as intended. However, I also defined some classes, e.g: class User(SQLObject): class sqlmeta: table = "gui_user" username = StringCol(length=16, alternateID=True) password = StringCol(length=16) balance = FloatCol(default=0) When I try to do a query using the class: User.selectBy(username="foo") I get an exception: ... File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\main.py", line 1371, in selectBy conn = connection or cls._connection File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\dbconnection.py", line 837, in __get__ return self.getConnection() File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\dbconnection.py", line 850, in getConnection "No connection has been defined for this thread " AttributeError: No connection has been defined for this thread or process How do I define a connection for a thread? I just realized I can pass in a connection keyword which I can give conn to to make it work, but how do I get it to work if I weren't to do that?
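
    SQLObject classes fall back to a default connection registered on sqlhub when none is attached to the class or passed in, so registering the connection once as the process-wide default makes User.selectBy() work without the connection keyword. A short sketch using connStr and the User class as defined above:

        from sqlobject import connectionForURI, sqlhub

        conn = connectionForURI(connStr)       # connStr as in the question
        sqlhub.processConnection = conn        # default connection for this process

        users = User.selectBy(username="foo")  # no connection= argument needed now
        print(list(users))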

    Read the article

  • Problem with mysql_query(); says resource expected

    - by user364651
    I have this php file. The lines marked as bold are showing up the error :"mysql_query() expecets parameter 2 to be a resources. Well, the similar syntax on the line on which I have commented 'No error??' is working just fine. function checkAnswer($answerEntered,$quesId) { //This functions checks whether answer to question having ques_id = $quesId is satisfied by $answerEntered or not $sql2="SELECT keywords FROM quiz1 WHERE ques_id=$quesId"; **$result2=mysql_query($sql2,$conn);** $keywords=explode(mysql_result($result2,0)); $matches=false; foreach($keywords as $currentKeyword) { if(strcasecmp($currentKeyword,$answerEntered)==0) { $matches=true; } } return $matches; } $sql="SELECT answers FROM user_info WHERE user_id = $_SESSION[user_id]"; $result=mysql_query($sql,$conn); // No error?? $answerText=mysql_result($result,0); //Retrieve answers entered by the user $answerText=str_replace('<','',$answerText); $answerText=str_replace('>',',',$answerText); $answerText=substr($answerText,0,(strlen($answerText)-1)); $answers=explode(",",$answerText); //Get the questions that have been assigned to the user. $sql1="SELECT questions FROM user_info WHERE user_id = $_SESSION[user_id]"; **$result1=mysql_query($sql1,$conn);** $quesIdList=mysql_result($result1,0); $quesIdList=substr($quesIdList,0,(strlen($quesIdList)-1)); $quesIdArray=explode(",",$quesIdList); $reportCard=""; $i=0; foreach($quesIdArray as $currentQuesId) { $answerEnteredByUser=$answers[$i]; if(checkAnswer($answerEnteredByUser,$currentQuesId)) { $reportCard=$reportCard+"1"; } else { $reportCard=$reportCard+"0"; } $i++; } echo $reportCard; ?> Here is the file connect.php. It is working just fine for other PHP documents. <?php $conn= mysql_connect("localhost","root","password"); mysql_select_db("quiz",$conn); ?>

    Read the article

  • MySQL Query Problem

    - by user559744
    Hello guys, I'm getting the following error message. Please help me. Thanks. Notice: Trying to get property of non-object in C:\xampp\htdocs\my\include\user_functions.php on line 34 Here is my code: $conn = db_connection(); if($conn == false) { user_error('Unable to connect to database'); return false; } $query = "UPDATE user SET passwd = '".$new_passwd."' WHERE username = '".$username."' "; $result=$conn->query($query); if($result == false) { user_error('Query Error'.$conn->error); return false; } if($result->num_rows == 1) { echo 'Password changed'; } else { echo 'Failed '; } Here is my db_connection function db_connection() { $db = new mysqli('localhost','root','','php_login'); if(!$db) { echo 'Could not connect to database server'; } else { return $db; } }

    Read the article

  • Ubuntu 14.04, OpenLDAP TLS problems

    - by larsemil
    So i have set up an openldap server using this guide here. It worked fine. But as i want to use sssd i also need TLS to be working for ldap. So i looked into and followed the TLS part of the guide. And i never got any errors and slapd started fine again. BUT. It does not seem to work when i try to use ldap over tls. root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se ldap_start_tls: Protocol error (2) additional info: unsupported extended operation Ganking up the debug level some notches returns some more information: root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se -d 5 ldap_url_parse_ext(ldap://83.209.243.253) ldap_create ldap_url_parse_ext(ldap://83.209.243.253:389/??base) ldap_extended_operation_s ldap_extended_operation ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP 83.209.243.253:389 ldap_new_socket: 3 ldap_prepare_socket: 3 ldap_connect_to_host: Trying 83.209.243.253:389 ldap_pvt_connect: fd: 3 tm: -1 async: 0 ldap_open_defconn: successful ldap_send_server_request ber_scanf fmt ({it) ber: ber_scanf fmt ({) ber: ber_flush2: 31 bytes to sd 3 ldap_result ld 0x7f25df51e220 msgid 1 wait4msg ld 0x7f25df51e220 msgid 1 (infinite timeout) wait4msg continue ld 0x7f25df51e220 msgid 1 all 1 ** ld 0x7f25df51e220 Connections: * host: 83.209.243.253 port: 389 (default) refcnt: 2 status: Connected last used: Fri Jun 6 08:52:16 2014 ** ld 0x7f25df51e220 Outstanding Requests: * msgid 1, origid 1, status InProgress outstanding referrals 0, parent count 0 ld 0x7f25df51e220 request count 1 (abandoned 0) ** ld 0x7f25df51e220 Response Queue: Empty ld 0x7f25df51e220 response count 0 ldap_chkResponseList ld 0x7f25df51e220 msgid 1 all 1 ldap_chkResponseList returns ld 0x7f25df51e220 NULL ldap_int_select read1msg: ld 0x7f25df51e220 msgid 1 all 1 ber_get_next ber_get_next: tag 0x30 len 42 contents: read1msg: ld 0x7f25df51e220 msgid 1 message type extended-result ber_scanf fmt ({eAA) ber: read1msg: ld 0x7f25df51e220 0 new referrals read1msg: mark request completed, ld 0x7f25df51e220 msgid 1 request done: ld 0x7f25df51e220 msgid 1 res_errno: 2, res_error: <unsupported extended operation>, res_matched: <> ldap_free_request (origid 1, msgid 1) ldap_parse_extended_result ber_scanf fmt ({eAA) ber: ldap_parse_result ber_scanf fmt ({iAA) ber: ber_scanf fmt (}) ber: ldap_msgfree ldap_err2string ldap_start_tls: Protocol error (2) additional info: unsupported extended operation ldap_free_connection 1 1 ldap_send_unbind ber_flush2: 7 bytes to sd 3 ldap_free_connection: actually freed So no good information there neither. In /var/log/syslog i get: Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 ACCEPT from IP=83.209.243.253:56440 (IP=0.0.0.0:389) Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 EXT oid=1.3.6.1.4.1.1466.20037 Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037" Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 RESULT tag=120 err=2 text=unsupported extended operation Jun 6 08:55:42 master slapd[21383]: conn=1008 op=1 UNBIND Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 closed If i portscan the host i get the following: Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 08:56 CEST Nmap scan report for h83-209-243-253.static.se.alltele.net (83.209.243.253) Host is up (0.0072s latency). 
Not shown: 996 closed ports PORT STATE SERVICE 22/tcp open ssh 80/tcp open http 389/tcp open ldap 636/tcp open ldapssl But when i check certs root@master:~# openssl s_client -connect daladevelop.se:636 -showcerts -state CONNECTED(00000003) SSL_connect:before/connect initialization SSL_connect:unknown state 140244859233952:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 317 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE --- And i feel like i am clearly out in deep water not knowing at all where to go from here. Anny hints appreciated on what to do or to get better debug logging... EDIT: This is my config slapcated from cn=config and it does not mention at all anything about TLS. I have inserted my certinfo.ldif: root@master:~# cat certinfo.ldif dn: cn=config add: olcTLSCACertificateFile olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem - add: olcTLSCertificateFile olcTLSCertificateFile: /etc/ssl/certs/daladevelop_slapd_cert.pem - add: olcTLSCertificateKeyFile olcTLSCertificateKeyFile: /etc/ssl/private/daladevelop_slapd_key.pem and when doing that i only got this as an answer. root@master:~# sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f certinfo.ldif SASL/EXTERNAL authentication started SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth SASL SSF: 0 modifying entry "cn=config" So still no wiser.

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in worker role property but we also specified the listening port in Node.js hardcoded. It should be changed that our Node.js can retrieve the endpoint. But I can tell you it won’t be working here. In the next post I will describe another way to execute the “node.exe” and Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Learning PostgreSql: bulk loading data

    - by Alexander Kuznetsov
    In this post we shall start loading data in bulk. For better performance of inserts, we shall load data into a table without constraints and indexes. This sounds familiar. There is a bulk copy utility, and it is very easy to invoke from C#. The following code feeds the output from a T-SQL stored procedure into a PostgreSql table: using ( var pgTableTarget = new PgTableTarget ( PgConnString , "Data.MyPgTable" , GetColumns ())) using ( var conn = new SqlConnection ( connectionString )) { conn.Open...(read more)

    Read the article

  • Else statement to show connection successful [closed]

    - by Craig Smith
    I am trying to write a script to test a database connection. At the moment it only displays text if the connection doesn't work; I am stuck trying to create an else statement to display "Connection Successful" if it works. Here's my code so far. Any help appreciated :) <? $conn = @mysql_connect("localhost", "root", ""); if (!$conn) { die("Connection failed: " .mysql_error()); } ?>

    Read the article

  • Multiple synonym dictionary matches in PostgreSQL full text searching

    - by Ryan VanMiddlesworth
    I am trying to do full text searching in PostgreSQL 8.3. It worked splendidly, so I added in synonym matching (e.g. 'bob' == 'robert') using a synonym dictionary. That works great too. But I've noticed that it apparently only allows a word to have one synonym. That is, 'al' cannot be 'albert' and 'allen'. Is this correct? Is there any way to have multiple dictionary matches in a PostgreSQL synonym dictionary? For reference, here is my sample dictionary file: bob robert bobby robert al alan al albert al allen And the SQL that creates the full text search config: CREATE TEXT SEARCH DICTIONARY nickname (TEMPLATE = synonym, SYNONYMS = nickname); CREATE TEXT SEARCH CONFIGURATION dxp_name (COPY = simple); ALTER TEXT SEARCH CONFIGURATION dxp_name ALTER MAPPING FOR asciiword WITH nickname, simple; What am I doing wrong? Thanks!

    Read the article

  • How to configure a Firebird Database to run in memory

    - by Robert
    I'm running software called Fishbowl Inventory on a Firebird database (Windows Server 2003). At the moment the Fishbowl software runs extremely slowly when more than one user accesses it. I'm thinking I may be able to speed up the application by forcing the database to run "In Memory". However, I cannot find documentation on how to do this. Any help would be greatly appreciated. Thank you in advance. Robert

    Read the article

  • Using ListViews on the Android?

    - by Robert
    Hey everyone, I am just getting started with the Android SDK and I had a quick question. I am trying to set up a ListView with a rectangle of color on the left and then a bit of text for each row. I also want to make it so I can click each entry in the list and open a new activity to display some information (similar to the contact list). Anyone have any examples to help me out? Thanks a ton, Robert Hill

    Read the article

  • Embed resource in .NET Assembly without assembly prefix?

    - by Robert Fraser
    Hi all, When you embed a resource into a .NET assembly using Visual Studio, it is prefixed with the assembly name. However, assemblies can have embedded resources that are not assembly-name-prefixed. The only way I can see to do this is to disassemble the assembly using ildasm, then re-assemble it, adding the new resource -- which works, but... do I really need to finish that sentence? (Desktop .NET Framework 3.5, VS 2008 SP1, C#, Win7 Enterprise x64) Thanks, All the best, Robert

    Read the article

  • How can I import code from LaunchPad.net?

    - by Robert A Henru
    Hi all, I want to import code from Launchpad.net. How can I do it? Is that using SVN? Can I use SVN to keep updated with the code changes? Thank you so much, Robert FYI: this is the code I want to import https://code.launchpad.net/~openerp-commiter/openobject-addons/trunk-extra-addons

    Read the article

  • process tree

    - by Robert
    I'm looking for an easy way to find the process tree (as shown by tools like Process Explorer) in C# or another .NET language. It would also be useful to find the command-line arguments of another process (the StartInfo on System.Diagnostics.Process seems invalid for processes other than the current process). I think these things can only be done by invoking the Win32 API, but I'd be happy to be proved wrong. Thanks! Robert

    Read the article

  • Hiding Group Column Names

    - by Robert
    You once replied to a post about hiding list group header names: http://edinkapic.blogspot.com/2008/06/hiding-list-view-group-headers.html I do not write code, let alone jQuery. But you mentioned that it would be better to write a solution in jQuery. Would you have code that would hide the group header and colon in a 2003 list (SP v.2)? Do you have any good leads? Thanks. Robert S.

    Read the article
