Search Results

Search found 34465 results on 1379 pages for 'database permissions'.


  • Should I be backing up a webapp's data to another host continuously?

    - by user196289
    I have a webapp in development. I need to plan for what happens if the host goes down. I will lose some very recent session state (which I can live with), and everything else should be persistently stored in the database. If I am starting up again after an outage, can I expect a good host to reconstruct the database to within minutes of where I was up to? Or seconds? Or should I build in a background process to continually mirror the database elsewhere? What is normal/sensible? Obviously a good host will have RAID and other redundancy, so the likelihood of total loss should be low, and if they have periodic backups I should lose only very recent data, but those backups are presumably designed with almost-static web content in mind, and my site is transactional, with new data being filed continuously (and a customer expectation that I don't ever lose it). Any suggestions/advice? Are there off-the-shelf frameworks for doing this? (I'm primarily working in Java.) And should I just plan to save the data, or should I plan to have an alternative usable host implementation ready to launch in case the original host doesn't come back up in a suitable timeframe?


  • Pass command line arguments to JUnit test case being run programmatically

    - by __nv__
    I am attempting to run a JUnit test from a Java class with: JUnitCore core = new JUnitCore(); core.addListener(new RunListener()); core.run(classToRun); The problem is that my JUnit test requires a database connection that is currently hardcoded in the JUnit test itself. What I am looking for is a way to run the JUnit test programmatically (as above) but pass in a database connection that I create in the Java class that runs the test, rather than hardcoding it within the JUnit class. Basically something like: JUnitCore core = new JUnitCore(); core.addListener(new RunListener()); core.addParameters(java.sql.Connection); core.run(classToRun); Then within the classToRun: @Test public void test1(Connection dbConnection){ Statement st = dbConnection.createStatement(); ResultSet rs = st.executeQuery("select total from dual"); rs.next(); String myTotal = rs.getString("TOTAL"); //btw my tests are selenium testcases:) selenium.isTextPresent(myTotal); } I know about the @Parameters annotation, but it doesn't seem applicable here, as it is more for running the same test case multiple times with differing values. I want all of my test cases to share a database connection that I pass in through a configuration file to my Java client, which then runs those test cases (also specified in the configuration file). Is this possible? P.S. I understand this seems like an odd way of doing things.
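
    For reference, one standard workaround, since JUnit test methods cannot take parameters, is a static holder that the runner populates before core.run() and the tests read back. A minimal sketch, assuming a hypothetical TestConfig class and placeholder JDBC settings:

        import java.sql.Connection;
        import java.sql.DriverManager;

        public final class TestConfig {
            private static Connection connection;
            public static void setConnection(Connection c) { connection = c; }
            public static Connection getConnection() { return connection; }
        }

        // runner side: build the connection from the configuration file, then run
        Connection conn = DriverManager.getConnection(jdbcUrl, user, password); // placeholders
        TestConfig.setConnection(conn);
        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.run(classToRun);

        // test side: the @Test method takes no arguments and fetches the shared connection
        @Test
        public void test1() throws Exception {
            Statement st = TestConfig.getConnection().createStatement();
            ResultSet rs = st.executeQuery("select total from dual");
            rs.next();
            selenium.isTextPresent(rs.getString("TOTAL"));
        }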


  • How to use different providers for Linq to entities?

    - by Anders Svensson
    I'm trying to familiarize myself a bit more with database programming, and I'm looking at different ways of creating a data access layer for applications. I've tried out a few ways but there is such a jungle of different database technologies that I don't know what to learn. For instance I've tried using datasets with tableadapters. Using that I am able to switch data provider rather easily (by programming against the interfaces such as IDbConnection). This is one thing I would want to achieve. But I also know everyone's talking about LINQ, and I'm trying to get to know that a bit better too. So I have tried using Linq to Sql classes as the data access layer as well, but apparently this is not provider independent (works only for SQL Server). So then I read about the Entity Framework (which just as Linq to SQL apparently has gotten its share of bashing already...). It's supposed to be provider independent everybody says, but how? I tried out a tutorial to create an entity data model, but the only providers to choose from were SQL Server/Express. Just for learning purposes, I would like to know how to use the entity framework with MS Access/OleDb. Also, I would appreciate some input on what is the preferred database technology for data access. Is it LINQ still after all the bashing, or should you just use datasets because they are provider independent? Any pointers for what to learn would be great, because it's just too much to learn it all if I'm not going to use it in the end...!


  • How to check whether data is stored in Core Data?

    - by Warrior
    I am new to iPhone development. I am trying to save some static values into the Core Data database. I want to check whether the data was actually stored, because I am not able to retrieve it, and I want to work out whether my mistake is in retrieving the data or in storing it in the first place. NSManagedObjectContext *context = [self managedObjectContext]; NSManagedObject *event = [NSEntityDescription insertNewObjectForEntityForName:@"Event" inManagedObjectContext:context]; [event setValue:fname forKey:@"firstname"]; [event setValue:lname forKey:@"lastname"]; I put the above code in the submit button handler of my form class. When I restart the app I fetch the data in the main view class using this code: NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event" inManagedObjectContext:self.managedObjectContext]; [fetchRequest setEntity:entity]; NSError *error; NSArray *items = [self.managedObjectContext executeFetchRequest:fetchRequest error:&error]; for (Event *info in items) { NSLog(@"the selected value is : %@", [info valueForKey:@"firstname"]); } [fetchRequest release]; I don't know where I am going wrong. Please help me out. Thanks.
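
    For what it's worth, one common omission in code like the above is never calling save: on the context; the inserted object then exists only in memory, so a fetch after relaunching the app returns nothing. A minimal sketch of the save step, using the same context variable:

        NSError *error = nil;
        if (![context save:&error]) {
            NSLog(@"Failed to save context: %@, %@", error, [error userInfo]);
        }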


  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJOs' class definitions are sometimes highly nested, but they should flatten okay, as the nesting is tree-like and contains no cycles (and the base elements are eventually primitives/Strings). It is preferred that the solution create one table per data type and that the tables have one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of any existing solutions that can: Automatically generate a CREATE TABLE definition from the class definition; Automatically generate a query to persist an object to the database, given an instance of the object; Automatically generate a query to retrieve an object from the database and return it as a POJO, given a key. Solutions that can do this with minimal modifications/annotations to the class files and minimal external configuration are preferred. Example: Java classes //Class to be persisted class TypeA { String guid; long timestamp; TypeB data1; TypeC data2; } class TypeB { int id; int someData; } class TypeC { int id; int otherData; } Could map to CREATE TABLE TypeA ( guid CHAR(255), timestamp BIGINT, data1_id INT, data1_someData INT, data2_id INT, data2_otherData INT ); Or something similar.
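
    For comparison, JPA implementations such as Hibernate cover all three requirements (schema generation, persisting, loading by key) and can flatten nested value types with @Embeddable/@Embedded. A sketch under that assumption, with @AttributeOverrides reproducing the column names from the example above:

        import javax.persistence.*;

        @Entity
        class TypeA {
            @Id String guid;
            long timestamp;

            @Embedded
            @AttributeOverrides({
                @AttributeOverride(name = "id", column = @Column(name = "data1_id")),
                @AttributeOverride(name = "someData", column = @Column(name = "data1_someData"))
            })
            TypeB data1;

            @Embedded
            @AttributeOverrides({
                @AttributeOverride(name = "id", column = @Column(name = "data2_id")),
                @AttributeOverride(name = "otherData", column = @Column(name = "data2_otherData"))
            })
            TypeC data2;
        }

        @Embeddable
        class TypeB { int id; int someData; }

        @Embeddable
        class TypeC { int id; int otherData; }

        // persist:  em.persist(a);
        // retrieve: TypeA a = em.find(TypeA.class, guid);
        // DDL:      hibernate.hbm2ddl.auto=create generates the CREATE TABLE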


  • Python2.7: How can I speed up this bit of code (loop/lists/tuple optimization)?

    - by user89
    I repeat the following idiom again and again. I read from a large file (sometimes up to 1.2 million records!) and store the output into an SQLite database. Putting stuff into the SQLite DB seems to be fairly fast. def readerFunction(recordSize, recordFormat, connection, outputDirectory, outputFile, numberOfObjects): insertString = "insert into NODE_DISP_INFO(node, analysis, timeStep, H1_translation, H2_translation, V_translation, H1_rotation, H2_rotation, V_rotation) values (?, ?, ?, ?, ?, ?, ?, ?, ?)" analysisNumber = int(outputPath[-3:]) outputFileObject = open(os.path.join(outputDirectory, outputFile), "rb") outputFileObject, numberOfRecordsInFileObject = determineNumberOfRecordsInFileObjectGivenRecordSize(recordSize, outputFileObject) numberOfRecordsPerObject = (numberOfRecordsInFileObject//numberOfObjects) loop1StartTime = time.time() for i in range(numberOfRecordsPerObject): processedRecords = [] loop2StartTime = time.time() for j in range(numberOfObjects): fout = outputFileObject.read(recordSize) processedRecords.append(tuple([j+1, analysisNumber, i] + [x for x in list(struct.unpack(recordFormat, fout))])) loop2EndTime = time.time() print "Time taken to finish loop2: {}".format(loop2EndTime-loop2StartTime) dbInsertStartTime = time.time() connection.executemany(insertString, processedRecords) dbInsertEndTime = time.time() loop1EndTime = time.time() print "Time taken to finish loop1: {}".format(loop1EndTime-loop1StartTime) outputFileObject.close() print "Finished reading output file for analysis {}...".format(analysisNumber) When I run the code, it seems that "loop 2" and "inserting into the database" are where most of the execution time is spent. The average "loop 2" time is 0.003s, but it is run up to 50,000 times in some analyses. The time spent putting stuff into the database is about the same: 0.004s. Currently, I am inserting into the database every time loop 2 finishes so that I don't have to deal with running out of RAM. What could I do to speed up "loop 2"?
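
    A hedged sketch of the usual micro-optimizations for a loop like "loop 2": precompile the format with struct.Struct, bind the method lookups to locals outside the loop, and build each row by tuple concatenation instead of via an intermediate list. It assumes the same variables as the function above:

        import struct

        record_struct = struct.Struct(recordFormat)  # compile the format once, not per record
        read = outputFileObject.read                 # avoid repeated attribute lookups
        unpack = record_struct.unpack

        for j in range(numberOfObjects):
            processedRecords.append((j + 1, analysisNumber, i) + unpack(read(recordSize)))

    If that is still too slow, reading the whole block for a given i in one read() call and unpacking it with a repeated format string (or iter_unpack, on Python 3) cuts the per-record call count further.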


  • Performing a MySQL query based off of $_GET results

    - by Michael N
    When a user clicks an item on my items page, it takes them to a blank page template, using $_GET to pass the item brand and model through. I'd like to perform another MySQL query when the user clicks through, to populate the blank page with the product details from my database. I'd like to retrieve the single row using the model number (unique ID) to populate the page with the information. I've tried a couple of things but am having a little difficulty. On my blank item page, I have $brand = $_GET['Brand']; $modelnumber = $_GET['ModelNumber']; $query = mysql_query("SELECT * FROM items WHERE `Model Number` = '$modelnumber'"); $results = mysql_fetch_row($query); echo $results; I think the quotes around Model Number are causing trouble, but without them, I get a Warning: mysql_fetch_row() expects parameter 1 to be resource, boolean given error. My database columns look like Brand | Model Number | Price | Description | Image A few other things I have tried include $query = mysql_query("SELECT * FROM item WHERE Model Number = $_GET['ModelNumber']"); which gave me a syntax error. I've also tried concatenating the $_GET, which gives me a mysql_fetch_row() expects parameter 1 to be resource, boolean given error, which leads me to believe that I'm also going about displaying the results incorrectly. I'm not sure if I need to put it in a while loop like I have on my previous page, which displays all items in the database, because this is just displaying one.
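
    For reference, a minimal sketch of the lookup done with PDO prepared statements, which sidesteps both the quoting confusion and the injection risk of interpolating $_GET into SQL (DSN and credentials are placeholders):

        <?php
        $pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $stmt = $pdo->prepare('SELECT * FROM items WHERE `Model Number` = ?');
        $stmt->execute(array($_GET['ModelNumber']));
        $item = $stmt->fetch(PDO::FETCH_ASSOC);   // one row, or false if no match

        if ($item !== false) {
            echo $item['Price'];                  // columns: Brand, Model Number, Price, ...
        }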


  • Problem binding an NSString into an SQLite query?

    - by Rajendra Bhole
    Hi, I am selecting a row of the table. The text of the row is stored in the application delegate as an NSString. I want to bind that NSString into the SELECT statement of an SQLite query. For that I have written this code in the table view class: - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { selectedText = [appDelegate.categoryArray objectAtIndex:indexPath.row]; appDelegate.selectedTextOfRow = selectedText; ListOfPrayersViewController *listVC = [[ListOfPrayersViewController alloc] init]; [self.navigationController pushViewController:listVC animated:YES]; [listVC release]; } and the database class code is: + (void) getDuas:(NSString *)dbPath{ if(sqlite3_open([dbPath UTF8String], &database) == SQLITE_OK){ SalahAppDelegate *appDelegate = (SalahAppDelegate *)[[UIApplication sharedApplication] delegate]; NSString *categoryTextForQuery =[NSString stringWithFormat:@"SELECT Category FROM Prayer WHERE Category ='%s'", appDelegate.selectedTextOfRow ]; NSLog(@"The Text %@", categoryTextForQuery); //const char *sqlQuery1 = (char *)categoryTextForQuery; //const char *sqlQuery = "SELECT Category FROM Prayer WHERE Category = 'Invocations for the beginning of the prayer'"; sqlite3_stmt *selectstmt; if(sqlite3_prepare_v2(database, [categoryTextForQuery UTF8String], -1, &selectstmt, NULL) == SQLITE_OK){ appDelegate.duasArray =[[NSMutableArray alloc] init]; while(sqlite3_step(selectstmt) == SQLITE_ROW){ NSString *dua = [[NSString alloc] initWithCString:(char *)sqlite3_column_text(selectstmt,0) encoding:NSASCIIStringEncoding]; Prayer *prayerObj = [[Prayer alloc] initwithDuas:dua]; prayerObj.DuaName = dua; [appDelegate.duasArray addObject:prayerObj]; } } } } The code never enters the while(sqlite3_step(selectstmt) == SQLITE_ROW) loop. Why? And how do I bind the table's selected text into the SELECT statement in SQLite?
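
    One likely culprit in the code above: %s is the format specifier for a C string, so formatting an NSString with it produces garbage, the WHERE clause matches nothing, and sqlite3_step never returns SQLITE_ROW. A minimal sketch of the two usual fixes, %@ formatting or, better, a bound parameter:

        // fix 1: %@ is the specifier for Objective-C objects such as NSString
        NSString *categoryTextForQuery =
            [NSString stringWithFormat:@"SELECT Category FROM Prayer WHERE Category = '%@'",
                                       appDelegate.selectedTextOfRow];

        // fix 2: let SQLite bind the value, which also avoids quoting problems
        const char *sql = "SELECT Category FROM Prayer WHERE Category = ?";
        sqlite3_stmt *selectstmt;
        if (sqlite3_prepare_v2(database, sql, -1, &selectstmt, NULL) == SQLITE_OK) {
            sqlite3_bind_text(selectstmt, 1, [appDelegate.selectedTextOfRow UTF8String],
                              -1, SQLITE_TRANSIENT);
        }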


  • Rendering ListBox takes too long on Windows Phone

    - by Bhawk1990
    I am working on a Windows Phone 7 application using a local SQLite database, and I'm having an issue with the rendering time of pages that use data binding. Currently it takes 60-70ms to retrieve the data from the database. Then it takes about 3100ms to render the retrieved data using a ListBox with data binding. Here you can see the DataTemplate of the ListBox: <DataTemplate x:Key="ListBoxItemTemplate"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="68" /> <ColumnDefinition /> </Grid.ColumnDefinitions> <TextBlock x:Name="TimeColumn" Text="{Binding TimeSpan}" Grid.Column="0" Grid.Row="0" Foreground="White" HorizontalAlignment="Center" VerticalAlignment="Center" /> <TextBlock Text="{Binding Stop.StopName}" Grid.Column="1" Grid.Row="0" Margin="15,0,0,0" TextWrapping="NoWrap" Foreground="Black" HorizontalAlignment="Left" VerticalAlignment="Center" /> </Grid> </DataTemplate> Comment: I have tried it using Canvas instead of Grid too, same result. Then the database loads data into a CSList (using ViciCoolStorage) and that gets bound to the ListBox: StationList.ItemsSource = App.RouteViewModel.RouteStops; Comment: I have tried adding the elements of the CSList to an ObservableCollection and binding that to the interface, but that didn't seem to change anything. Question: Am I doing something wrong that results in a huge load time, even if just loading 10 elements, or is this normal? Do you have any recommendations for getting better performance with data binding? Thank you for your answers in advance!


  • I need help with a SQL query. Fetching an entry, its most recent revision, and its fields.

    - by Tigger ate my dad
    Hi there, I'm building a CMS for my own needs, and have finished planning my database layout. Basically I am abstracting all possible data models into "sections" and all entries into one table. The final layout is as follows: Database diagram: I have yet to be allowed to post images, so here is a link to a diagram of my database. Entries (section_entries) are children of their section (sections). I save all edits to the entries as a new revision (section_entry_revisions), and also track revisions of the sections (section_revisions), in order to match the values of a revision to the fields of the section that existed when the entry revision was made. The section revisions can have a number of fields (section_revision_fields) that define the attributes of entries in the section. There is a many-to-many relationship between the fields (section_revision_fields) and the entry revisions (section_entry_revisions) that stores the values of the attributes defined by the section revision. Feel free to ask questions if the diagram is confusing. Now, this is the most complex SQL I've ever worked with, and the task of fetching my data is a little daunting. Basically what I want help with is fetching an entry, when the only known variables are section_id and section_entry_id. The query should fetch the most recent revision of that entry, and the section_revisions row corresponding to section_revision_id in the section_entry_revisions table. It should also fetch the values of the fields in the section revision. I was hoping for a query result where there would be as many rows as there are fields in the section. Each row would contain the information of the entry and the section, and then information for one of the fields (e.g. each row corresponding to a field and its value). I tried to explain the best I could. Again, feel free to ask questions if my description is somehow lacking. I hope someone is up for the challenge. Best regards. :-)
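
    A hedged sketch of the shape such a query usually takes. Since the diagram isn't visible, the column names below are assumptions, and field_values stands in for the unnamed many-to-many table between fields and entry revisions; the subquery pins the newest revision and the joins fan out to one row per field:

        SELECT e.id     AS entry_id,
               er.id    AS revision_id,
               f.id     AS field_id,
               fv.value AS field_value
        FROM section_entries e
        JOIN section_entry_revisions er
          ON er.section_entry_id = e.id
         AND er.id = (SELECT MAX(id)
                        FROM section_entry_revisions
                       WHERE section_entry_id = e.id)
        JOIN section_revisions sr
          ON sr.id = er.section_revision_id
        JOIN section_revision_fields f
          ON f.section_revision_id = sr.id
        LEFT JOIN field_values fv
          ON fv.field_id = f.id
         AND fv.section_entry_revision_id = er.id
        WHERE e.section_id = ?
          AND e.id = ?;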


  • Sqlite3 INSERT INTO question

    - by user316717
    Hi, my first post. I am creating an exercise app that will record the weight used and the number of "reps" the user did in 4 "sets" per day over a period of 7 days, so the user may view their progress. I have built the database table named FIELDS with 2 columns, ROW and FIELD_DATA, and I can use the code below to load the data into the db. But the code has a SQL statement that says: INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@'); When I change the statement to: INSERT INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@'); nothing happens. That is, no data is recorded in the db. Below is the code: #define kFilname @"StData.sqlite3" - (NSString *)dataFilePath { NSArray *paths = NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; return [documentsDirectory stringByAppendingPathComponent:kFilname]; } -(IBAction)saveData:(id)sender; { for (int i = 1; i <= 8; i++) { NSString *fieldName = [[NSString alloc]initWithFormat:@"field%d", i]; UITextField *field = [self valueForKey:fieldName]; [fieldName release]; NSString *insert = [[NSString alloc] initWithFormat: @"INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@');",i, field.text]; // sqlite3_stmt *stmt; char *errorMsg; if (sqlite3_exec (database, [insert UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) { // NSAssert1(0, @"Error updating table: %s", errorMsg); sqlite3_free(errorMsg); } } sqlite3_close(database); } So how do I modify the code to do what I want? It seemed like a simple SQL statement change at first, but obviously there must be more to it. I am new to Objective-C and iPhone programming. I am not new to using SQL statements, as I have been creating web apps in ASP for a number of years. Any help will be greatly appreciated, this is driving me nuts! Thanks in advance. Dave
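
    A plausible explanation for the silence: if ROW is the table's primary key and rows 1 through 8 already exist, a plain INSERT fails with a constraint violation, and because the NSAssert1 line is commented out the error is thrown away unseen (which is also why INSERT OR REPLACE "works"). A minimal sketch that at least logs the failure, using the same variables:

        char *errorMsg;
        if (sqlite3_exec(database, [insert UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
            NSLog(@"Error inserting row: %s", errorMsg);
            sqlite3_free(errorMsg);
        }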


  • Help with Java threads or executors: Executing several MySQL selects, inserts and updates simultaneously

    - by Martin
    Hi. I'm writing an application to analyse a MySQL database, and I need to execute several DMLs simultaneously; for example: // In ResultSet rsA: Select * from A; rsA.beforeFirst(); while (rsA.next()) { id = rsA.getInt("id"); // Retrieve data from table B: Select * from B where B.Id=" + id; // Crunch some numbers using the data from B // Close resultset B } I'm declaring an array of data objects, each with its own Connection to the database, which in turn calls several methods for the data analysis. The problem is that all threads use the same connection, so all tasks throw exceptions: "Lock wait timeout exceeded; try restarting transaction". I believe there is a way to write the code in such a way that any given object has its own connection and executes the required tasks independently of any other object. For example: DataObject dataObject[0] = new DataObject(id[0]); DataObject dataObject[1] = new DataObject(id[1]); DataObject dataObject[2] = new DataObject(id[2]); ... DataObject dataObject[N] = new DataObject(id[N]); // The 'DataObject' class has its own connection to the database, // so each instance of the object should use its own connection. // It also has a "run" method, which contains all the tasks required. Executor ex = Executors.newFixedThreadPool(10); for(i=0;i<=N;i++) { ex.execute(dataObject[i]); } // Here is where the problem is: Each instance creates a new connection, // but every DML from any of the objects is cluttered in just one connection // (in the MySQL command line, "SHOW PROCESSLIST;" shows every connection, and all but // one are idle). Can you point me in the right direction? Thanks
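
    A minimal sketch of the per-task connection pattern, with placeholder JDBC URL and credentials: each DataObject opens, uses, and closes its own Connection inside run(), so no two threads ever share one:

        class DataObject implements Runnable {
            private final int id;
            DataObject(int id) { this.id = id; }

            @Override
            public void run() {
                // try-with-resources: this task's connection is private and always closed
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/mydb", "user", "pass");
                     PreparedStatement ps = conn.prepareStatement(
                         "SELECT * FROM B WHERE B.Id = ?")) {
                    ps.setInt(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // crunch some numbers using the data from B
                        }
                    }
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
        }

    If the lock-wait timeouts persist even with genuinely separate connections, the contention is in the transactions themselves (long-running updates against the same rows), not in the threading.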


  • Nullable values in C++

    - by DanDan
    I'm creating a database access layer in native C++, and I'm looking at ways to support NULL values. Here is what I have so far: class CNullValue { public: static CNullValue Null() { static CNullValue nv; return nv; } }; template<class T> class CNullableT { public: CNullableT(CNullValue &v) : m_Value(T()), m_IsNull(true) { } CNullableT(T value) : m_Value(value), m_IsNull(false) { } bool IsNull() { return m_IsNull; } T GetValue() { return m_Value; } private: T m_Value; bool m_IsNull; }; This is how I'll have to define functions: void StoredProc(int i, CNullableT<int> j) { ...connect to database ...if j.IsNull pass null to database etc } And I call it like this: sp.StoredProc(1, 2); or sp.StoredProc(3, CNullValue::Null()); I was just wondering if there was a better way than this. In particular I don't like the singleton-like object of CNullValue with the statics. I'd prefer to just do sp.StoredProc(3, CNullValue); or something similar. How do others solve this problem?
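
    For comparison, the same idea now ships in the standard library; a sketch with std::optional (C++17), where std::nullopt plays the role of CNullValue::Null() and no singleton is needed:

        #include <optional>

        void StoredProc(int i, std::optional<int> j) {
            if (j.has_value()) {
                // ...pass *j to the database
            } else {
                // ...pass NULL to the database
            }
        }

        // call sites mirror the originals:
        // StoredProc(1, 2);
        // StoredProc(3, std::nullopt);

    On pre-C++17 toolchains, boost::optional<T> offers essentially the same interface.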


  • Best method to cache objects in PHP?

    - by Martin Bean
    Hi, I'm currently developing a large site that handles user registrations. A social networking website, for argument's sake. However, I've noticed a lag in page loads, and deciphered that it is the creation of objects on pages that's slowing things down. For example, I have a Member object that, when instantiated with an ID passed as a constructor parameter, queries the database for that member's row in the members database table. Not bad, but this object is created each time a page is loaded, and more than once when, say, building an array of that particular member's friends, as a new Member object is created for each friend. So on a single page I can have upwards of seven of the same object, but containing different properties. What I'm wanting to do is find a way to reduce the database load, and to persist objects between page loads. For example, the logged-in user's object would be created on login (which I can do) but then stored somewhere for retrieval, so I don't have to keep re-creating the object between page loads. What is the best solution for this? I've had a look at Memcache, but with it being a third-party module I can't have the web host install it on this occasion. What are my alternatives, and/or best practices in my case? Thanks in advance.
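
    With memcached off the table, one common fallback is serializing the object into the session so it survives between that user's page loads; a minimal sketch, assuming the Member class described above:

        <?php
        session_start();

        if (isset($_SESSION['member'])) {
            $member = unserialize($_SESSION['member']);   // reuse the cached copy
        } else {
            $member = new Member($memberId);              // hits the database once
            $_SESSION['member'] = serialize($member);
        }

    The trade-off is staleness: the cached copy has to be invalidated (unset) whenever the member's row changes.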


  • Determining difference in timestamps for two values in the same MySQL table

    - by JayRizzo03
    I am relatively new to programming in PHP, so I apologize if this is a rather simple question. I have a MySQL database table called MachineReports that contains the following columns: ReportNum (primary key, auto increment), MachineID and Timestamp. Here is some example data:

    | ReportNum | MachineID | Timestamp           |
    | 1         | AD3203    | 2012-11-18 06:32:28 |
    | 2         | AD3203    | 2012-11-19 04:00:15 |
    | 3         | BC4300    | 2012-11-19 04:00:15 |

    What I am attempting to do is find the difference in timestamps, in seconds, for each machine ID by iterating over each row set. I am getting stuck on the best way to do this, however. Here is the code I've written so far: <?php include '../dbconnect/dbconnect.php'; $machineID=[]; //Get a list of all MachineIDs in the database foreach($dbh->query('SELECT DISTINCT(MachineID) FROM MachineReports') as $row) { array_push($machineID, $row[0]); } for($i=0;$i<count($machineID);$i++){ foreach($dbh->query("SELECT MachineID FROM MachineReports WHERE MachineID='$machineID[$i]' ORDER BY MachineID") as $row) { //code to associate each machineID with two time stamps goes here } } ?> This code just lists out the contents of the table row by row. My ultimate goal is to find the difference in timestamps for a certain MachineID. One of the things I've considered is using a multidimensional array in PHP, using the $machineID as the key and then storing the timestamps inside the array the key points to. However, I'm uncertain how to do that since my query parses row by row. I have quite a few questions. 1) Is this the most efficient way to be doing this? I suspect my database table design may not be the best. 2) What would be the best way to determine the difference in timestamps for a certain machineID? Even just a pointer to a topic that would prompt me to think about this in a different way would be helpful - I'm not afraid to do research. Thanks!
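
    One way to get the differences with the structure you already have: fetch each machine's timestamps in order and diff consecutive pairs, either in PHP (below) or in SQL with TIMESTAMPDIFF(SECOND, ...) over a self-join. A sketch reusing the $dbh handle from above, assuming it is a PDO connection:

        <?php
        $stmt = $dbh->prepare(
            'SELECT Timestamp FROM MachineReports WHERE MachineID = ? ORDER BY Timestamp');
        $stmt->execute(array($machineID[$i]));
        $times = $stmt->fetchAll(PDO::FETCH_COLUMN);

        $diffs = array();
        for ($k = 1; $k < count($times); $k++) {
            // difference in seconds between consecutive reports for this machine
            $diffs[] = strtotime($times[$k]) - strtotime($times[$k - 1]);
        }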


  • Is my approach for persistent login secure?

    - by Jay
    I'm very much stuck on a reasonably secure approach to implementing a 'Remember me' feature in a login system. Here's my approach so far; please advise me whether it makes sense and is reasonably secure: Logging in: User provides email and password to log in (both are valid). Get the user_id from DB table Users by matching the provided email. Generate two random hashed strings, key1 and key2, and store them in cookies. In DB table COOKIES, store key1 and key2 along with user_id. To check login: If the key1 and key2 cookies both exist, validate both keys against DB table COOKIES (if a row with key1 and key2 exists, the user is logged in). If the cookie is valid, regenerate key2 and update it in the cookie and also in the database. Why regenerate the key: Because if someone steals the cookie and logs in with it, it will keep working only until the real user logs in. When the real user logs in, the stolen cookie becomes invalid. Right? Why do I need two keys: Because if I store user_id and a single key in the cookie and database, and the user wants to be remembered on another browser or computer, then the new key will overwrite the old one in the database, so the user's cookie on the earlier browser/PC becomes invalid. The user wouldn't be able to stay logged in at more than one place. Thanks for your opinions.
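
    A minimal sketch of the check-and-rotate step described above, assuming a PDO handle and the COOKIES table (user_id, key1, key2); random_bytes() is PHP 7+, so substitute your own secure generator on older versions:

        <?php
        function checkRememberMe(PDO $dbh) {
            if (!isset($_COOKIE['key1'], $_COOKIE['key2'])) return null;

            $stmt = $dbh->prepare('SELECT user_id FROM COOKIES WHERE key1 = ? AND key2 = ?');
            $stmt->execute(array($_COOKIE['key1'], $_COOKIE['key2']));
            $userId = $stmt->fetchColumn();
            if ($userId === false) return null;

            // rotate key2 so a stolen cookie stops working once the real user returns
            $newKey2 = bin2hex(random_bytes(32));
            $dbh->prepare('UPDATE COOKIES SET key2 = ? WHERE user_id = ?')
                ->execute(array($newKey2, $userId));
            setcookie('key2', $newKey2, time() + 60 * 60 * 24 * 30);

            return $userId;
        }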


  • '𠂉' Not a valid unicode character, but in the unicode character set?

    - by Steve Cotner
    Short story: I can't get an entity like '𠂉' to store in a MySQL database, either by using a text field in a Ruby on Rails app (with default UTF-8 encoding) or by inputting it directly with a MySQL GUI app. As far as I can tell, all Chinese characters and radicals can be entered into the database without problem, but not these rarely typed 'character components.' The character mentioned above is unicode U+20089 and html entity &#131209; I can get it to display on the page by entering <html>&#131209;</html> and removing html escaping, but I would like to store it simply as the unicode character and keep the html escaping in place. There are many other Chinese 'components' (parts of full characters, generally consisting of 2 or 3 strokes) that cause the same problem. According to this page, the character mentioned is in the UTF-8 charset: http://www.fileformat.info/info/unicode/char/20089/charset_support.htm But on the neighboring '...20089/index.htm' page, there's an alert saying it's not a valid unicode character. For reference, that entity can be found in Mac OS X by searching through the character palette (international menu, "Show Character Palette"), searching by radical, and looking under the '?' radical. Apologies if this is too open-ended... can a character like this be stored in a UTF-8-based database? How is this character both supported and unsupported, both present in the character set and not valid?
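
    For what it's worth, this is the classic symptom of MySQL's legacy utf8 charset, which stores at most three bytes per character; U+20089 lies outside the Basic Multilingual Plane and takes four bytes in UTF-8, so utf8 columns reject or mangle it. MySQL 5.5.3+ offers utf8mb4, which accepts the full range. A sketch of the conversion (table name is a placeholder):

        ALTER TABLE my_table CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
        -- and make the connection use the same charset:
        SET NAMES utf8mb4;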


  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    Having so much issue with NginX configuration since I'm new with NginX. Been using Lighttpd for quite sometime. Here are the base info. New Machine - CentOS 6.3 64 Bit - NginX 1.2.4-1.e16.ngx - Php-FPM 5.3.18-1.e16.remi Old Machine - CentOS 6.2 64Bit - Lighttpd 1.4.25-3.e16 Original Lighttpd config file: ####################################################################### ## ## /etc/lighttpd/lighttpd.conf ## ## check /etc/lighttpd/conf.d/*.conf for the configuration of modules. ## ####################################################################### ####################################################################### ## ## Some Variable definition which will make chrooting easier. ## ## if you add a variable here. Add the corresponding variable in the ## chroot example aswell. ## var.log_root = "/var/log/lighttpd" var.server_root = "/var/www" var.state_dir = "/var/run" var.home_dir = "/var/lib/lighttpd" var.conf_dir = "/etc/lighttpd" ## ## run the server chrooted. ## ## This requires root permissions during startup. ## ## If you run Chrooted set the the variables to directories relative to ## the chroot dir. ## ## example chroot configuration: ## #var.log_root = "/logs" #var.server_root = "/" #var.state_dir = "/run" #var.home_dir = "/lib/lighttpd" #var.vhosts_dir = "/vhosts" #var.conf_dir = "/etc" # #server.chroot = "/srv/www" ## ## Some additional variables to make the configuration easier ## ## ## Base directory for all virtual hosts ## ## used in: ## conf.d/evhost.conf ## conf.d/simple_vhost.conf ## vhosts.d/vhosts.template ## var.vhosts_dir = server_root + "/vhosts" ## ## Cache for mod_compress ## ## used in: ## conf.d/compress.conf ## var.cache_dir = "/var/cache/lighttpd" ## ## Base directory for sockets. ## ## used in: ## conf.d/fastcgi.conf ## conf.d/scgi.conf ## var.socket_dir = home_dir + "/sockets" ## ####################################################################### ####################################################################### ## ## Load the modules. include "modules.conf" ## ####################################################################### ####################################################################### ## ## Basic Configuration ## --------------------- ## server.port = 80 ## ## Use IPv6? ## #server.use-ipv6 = "enable" ## ## bind to a specific IP ## #server.bind = "localhost" ## ## Run as a different username/groupname. ## This requires root permissions during startup. ## server.username = "lighttpd" server.groupname = "lighttpd" ## ## enable core files. ## #server.core-files = "disable" ## ## Document root ## server.document-root = server_root + "/lighttpd" ## ## The value for the "Server:" response field. ## ## It would be nice to keep it at "lighttpd". ## #server.tag = "lighttpd" ## ## store a pid file ## server.pid-file = state_dir + "/lighttpd.pid" ## ####################################################################### ####################################################################### ## ## Logging Options ## ------------------ ## ## all logging options can be overwritten per vhost. ## ## Path to the error log file ## server.errorlog = log_root + "/error.log" ## ## If you want to log to syslog you have to unset the ## server.errorlog setting and uncomment the next line. ## #server.errorlog-use-syslog = "enable" ## ## Access log config ## include "conf.d/access_log.conf" ## ## The debug options are moved into their own file. ## see conf.d/debug.conf for various options for request debugging. 
## include "conf.d/debug.conf" ## ####################################################################### ####################################################################### ## ## Tuning/Performance ## -------------------- ## ## corresponding documentation: ## http://www.lighttpd.net/documentation/performance.html ## ## set the event-handler (read the performance section in the manual) ## ## possible options on linux are: ## ## select ## poll ## linux-sysepoll ## ## linux-sysepoll is recommended on kernel 2.6. ## server.event-handler = "linux-sysepoll" ## ## The basic network interface for all platforms at the syscalls read() ## and write(). Every modern OS provides its own syscall to help network ## servers transfer files as fast as possible ## ## linux-sendfile - is recommended for small files. ## writev - is recommended for sending many large files ## server.network-backend = "linux-sendfile" ## ## As lighttpd is a single-threaded server, its main resource limit is ## the number of file descriptors, which is set to 1024 by default (on ## most systems). ## ## If you are running a high-traffic site you might want to increase this ## limit by setting server.max-fds. ## ## Changing this setting requires root permissions on startup. see ## server.username/server.groupname. ## ## By default lighttpd would not change the operation system default. ## But setting it to 2048 is a better default for busy servers. ## ## With SELinux enabled, this is denied by default and needs to be allowed ## by running the following once : setsebool -P httpd_setrlimit on server.max-fds = 2048 ## ## Stat() call caching. ## ## lighttpd can utilize FAM/Gamin to cache stat call. ## ## possible values are: ## disable, simple or fam. ## server.stat-cache-engine = "simple" ## ## Fine tuning for the request handling ## ## max-connections == max-fds/2 (maybe /3) ## means the other file handles are used for fastcgi/files ## server.max-connections = 1024 ## ## How many seconds to keep a keep-alive connection open, ## until we consider it idle. ## ## Default: 5 ## #server.max-keep-alive-idle = 5 ## ## How many keep-alive requests until closing the connection. ## ## Default: 16 ## #server.max-keep-alive-requests = 18 ## ## Maximum size of a request in kilobytes. ## By default it is unlimited (0). ## ## Uploads to your server cant be larger than this value. ## #server.max-request-size = 0 ## ## Time to read from a socket before we consider it idle. ## ## Default: 60 ## #server.max-read-idle = 60 ## ## Time to write to a socket before we consider it idle. ## ## Default: 360 ## #server.max-write-idle = 360 ## ## Traffic Shaping ## ----------------- ## ## see /usr/share/doc/lighttpd/traffic-shaping.txt ## ## Values are in kilobyte per second. ## ## Keep in mind that a limit below 32kB/s might actually limit the ## traffic to 32kB/s. This is caused by the size of the TCP send ## buffer. ## ## per server: ## #server.kbytes-per-second = 128 ## ## per connection: ## #connection.kbytes-per-second = 32 ## ####################################################################### ####################################################################### ## ## Filename/File handling ## ------------------------ ## ## files to check for if .../ is requested ## index-file.names = ( "index.php", "index.rb", "index.html", ## "index.htm", "default.htm" ) ## index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" ) ## ## deny access the file-extensions ## ## ~ is for backupfiles from vi, emacs, joe, ... 
## .inc is often used for code includes which should in general not be part ## of the document-root url.access-deny = ( "~", ".inc" ) ## ## disable range requests for pdf files ## workaround for a bug in the Acrobat Reader plugin. ## $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## ## url handling modules (rewrite, redirect) ## #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" ) ## ## both rewrite/redirect support back reference to regex conditional using %n ## #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} ## ## which extensions should not be handle via static-file transfer ## ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi ## static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" ) ## ## error-handler for status 404 ## #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' ## #server.errorfile-prefix = "/srv/www/htdocs/errors/status-" ## ## mimetype mapping ## include "conf.d/mime.conf" ## ## directory listing configuration ## include "conf.d/dirlisting.conf" ## ## Should lighttpd follow symlinks? ## server.follow-symlink = "enable" ## ## force all filenames to be lowercase? ## #server.force-lowercase-filenames = "disable" ## ## defaults to /var/tmp as we assume it is a local harddisk ## server.upload-dirs = ( "/var/tmp" ) ## ####################################################################### ####################################################################### ## ## SSL Support ## ------------- ## ## To enable SSL for the whole server you have to provide a valid ## certificate and have to enable the SSL engine.:: ## ## ssl.engine = "enable" ## ssl.pemfile = "/path/to/server.pem" ## ## The HTTPS protocol does not allow you to use name-based virtual ## hosting with SSL. If you want to run multiple SSL servers with ## one lighttpd instance you must use IP-based virtual hosting: :: ## ## $SERVER["socket"] == "10.0.0.1:443" { ## ssl.engine = "enable" ## ssl.pemfile = "/etc/ssl/private/www.example.com.pem" ## server.name = "www.example.com" ## ## server.document-root = "/srv/www/vhosts/example.com/www/" ## } ## ## If you have a .crt and a .key file, cat them together into a ## single PEM file: ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \ ## > /etc/ssl/private/lighttpd.pem ## #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" ## ## optionally pass the CA certificate here. ## ## #ssl.ca-file = "" ## ####################################################################### ####################################################################### ## ## custom includes like vhosts. 
## #include "conf.d/config.conf" #include_shell "cat /etc/lighttpd/vhosts.d/*.conf" ## ####################################################################### ####################################################################### ### Custom Added by me #url.rewrite-once = (".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0", "" => "/index.php") url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) # expire.url = ( "" => "access 1 days" ) include "myvhost-vhosts.conf" ####################################################################### Here is my Vhost file for lighttpd $HTTP["host"] =~ "192.168.8.35$" { server.document-root = "/var/www/lighttpd/qc41022012/public" server.errorlog = "/var/log/lighttpd/error.log" accesslog.filename = "/var/log/lighttpd/access.log" server.error-handler-404 = "/e404.php" } and here is my nginx.conf file user nginx; worker_processes 5; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/testsite/logs/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; # include /etc/nginx/conf.d/*.conf; ## I added this ## include /etc/nginx/sites-available/*; } Here is my NginX Vhost file server { server_name 192.168.8.91; access_log /var/log/nginx/myapps/logs/access.log; error_log /var/log/nginx/myapps/logs/error.log; root /var/www/html/myapps/public; location / { index index.html index.htm index.php; } location = /favicon.ico { return 204; access_log off; log_not_found off; } # location ~ \.php$ { # try_files $uri /index.php; # include /etc/nginx/fastcgi_params; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_param SCRIPT_NAME $fastcgi_script_name; location ~ \.php.*$ { rewrite ^(.*.php)/ $1 last; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_intercept_errors on; # fastcgi_param SCRIPT_FILENAME $document_root/index.php; # fastcgi_param PATH_INFO $uri; # fastcgi_pass 127.0.0.1:9000; # include fastcgi_params; } } We have a custom apps that we created that works great with lighttpd. I went through some headache also when we were trying to figure out how to make it work with lighttpd. this is the line that helps make it work in lighttpd. url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) but I couldn't figure out how to make it works in NginX. The webserver run just fine when we use the phpinfo.php test file. However as soon as I point it to my apps, nothing comes up. Check the error.log file and there's no error. Very mind boggling. I spent over 1 week trying to figure it out with no luck.. Please help?


  • Fedora 16 can connect to samba share using smbclient but not in nautilus 3.2.1

    - by Nathan Jones
    I have a machine running Ubuntu 11.10 Server acting as a Samba server to share my home directory. Everything works fine on my Windows 7 machine, but on my Fedora 16 laptop, if I use Nautilus to try to access the share using smb://192.168.0.8/nathan in the location bar, it just has the loading cursor and does nothing. It never shows any errors, nothing. Using smbclient works just fine, but I'd like to get it working in Nautilus. I know that there can be problems with SELinux and Samba, so I created a file called booleans.local that contains samba_enable_home_dirs=1. My smb.conf file looks like this: # For Unix password sync to work on a Debian GNU/Linux system, the following # parameters must be set (thanks to Ian Kahan <<[email protected]> for # sending the correct chat script for the passwd program in Debian Sarge). passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . # This boolean controls whether PAM will be used for password changes # when requested by an SMB client instead of the program listed in # 'passwd program'. The default is 'no'. pam password change = yes # This option controls how unsuccessful authentication attempts are mapped # to anonymous connections map to guest = bad user ########## Domains ########### # Is this machine able to authenticate users. Both PDC and BDC # must have this setting enabled. If you are the BDC you must # change the 'domain master' setting to no # ; domain logons = yes # # The following setting only takes effect if 'domain logons' is set # It specifies the location of the user's profile directory # from the client point of view) # The following required a [profiles] share to be setup on the # samba server (see below) ; logon path = \\%N\profiles\%U # Another common choice is storing the profile in the user's home directory # (this is Samba's default) # logon path = \\%N\%U\profile # The following setting only takes effect if 'domain logons' is set # It specifies the location of a user's home directory (from the client # point of view) ; logon drive = H: # logon home = \\%N\%U # The following setting only takes effect if 'domain logons' is set # It specifies the script to run during logon. The script must be stored # in the [netlogon] share # NOTE: Must be store in 'DOS' file format convention ; logon script = logon.cmd # This allows Unix users to be created on the domain controller via the SAMR # RPC pipe. The example command creates a user account with a disabled Unix # password; please adapt to your needs ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u # This allows machine accounts to be created on the domain controller via the # SAMR RPC pipe. # The following assumes a "machines" group exists on the system ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u # This allows Unix groups to be created on the domain controller via the SAMR # RPC pipe. ; add group script = /usr/sbin/addgroup --force-badname %g ########## Printing ########## # If you want to automatically load your printer list rather # than setting them up individually then you'll need this # load printers = yes # lpr(ng) printing. You may wish to override the location of the # printcap file ; printing = bsd ; printcap name = /etc/printcap # CUPS printing. See also the cupsaddsmb(8) manpage in the # cupsys-client package. 
; printing = cups ; printcap name = cups ############ Misc ############ # Using the following line enables you to customise your configuration # on a per machine basis. The %m gets replaced with the netbios name # of the machine that is connecting ; include = /home/samba/etc/smb.conf.%m # Most people will find that this option gives better performance. # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html # for details # You may want to add the following on a Linux system: # SO_RCVBUF=8192 SO_SNDBUF=8192 # socket options = TCP_NODELAY # The following parameter is useful only if you have the linpopup package # installed. The samba maintainer and the linpopup maintainer are # working to ease installation and configuration of linpopup and samba. ; message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' & # Domain Master specifies Samba to be the Domain Master Browser. If this # machine will be configured as a BDC (a secondary logon server), you # must set this to 'no'; otherwise, the default behavior is recommended. # domain master = auto # Some defaults for winbind (make sure you're not using the ranges # for something else.) ; idmap uid = 10000-20000 ; idmap gid = 10000-20000 ; template shell = /bin/bash # The following was the default behaviour in sarge, # but samba upstream reverted the default because it might induce # performance issues in large organizations. # See Debian bug #368251 for some of the consequences of *not* # having this setting and smb.conf(5) for details. ; winbind enum groups = yes ; winbind enum users = yes # Setup usershare options to enable non-root users to share folders # with the net usershare command. # Maximum number of usershare. 0 (default) means that usershare is disabled. ; usershare max shares = 100 # Allow users who've been granted usershare privileges to create # public shares, not just authenticated ones usershare allow guests = yes #======================= Share Definitions ======================= # Un-comment the following (and tweak the other settings below to suit) # to enable the default home directory shares. This will share each # user's home director as \\server\username [homes] comment = Home Directories browseable = yes # By default, the home directories are exported read-only. Change the # next parameter to 'no' if you want to be able to write to them. read only = no # File creation mask is set to 0700 for security reasons. If you want to # create files with group=rw permissions, set next parameter to 0775. ; create mask = 0775 # Directory creation mask is set to 0700 for security reasons. If you want to # create dirs. with group=rw permissions, set next parameter to 0775. ; directory mask = 0775 # By default, \\server\username shares can be connected to by anyone # with access to the samba server. Un-comment the following parameter # to make sure that only "username" can connect to \\server\username # The following parameter makes sure that only "username" can connect # # This might need tweaking when using external authentication schemes valid users = %S # Un-comment the following and create the netlogon directory for Domain Logons # (you need to configure Samba to act as a domain controller too.) ;[netlogon] ; comment = Network Logon Service ; path = /home/samba/netlogon ; guest ok = yes ; read only = yes # Un-comment the following and create the profiles directory to store # users profiles (see the "logon path" option above) # (you need to configure Samba to act as a domain controller too.) 
# The path below should be writable by all users so that their # profile directory may be created the first time they log on ;[profiles] ; comment = Users profiles ; path = /home/samba/profiles ; guest ok = no ; browseable = no ; create mask = 0600 ; directory mask = 0700 [printers] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = no create mask = 0700 # Windows clients look for this share name as a source of downloadable # printer drivers [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = no # Uncomment to allow remote administration of Windows print drivers. # You may need to replace 'lpadmin' with the name of the group your # admin users are members of. # Please note that you also need to set appropriate Unix permissions # to the drivers directory for these users to have write rights in it ; write list = root, @lpadmin # A sample share for sharing your CD-ROM with others. ;[cdrom] ; comment = Samba server's CD-ROM ; read only = yes ; locking = no ; path = /cdrom ; guest ok = yes # The next two parameters show how to auto-mount a CD-ROM when the # cdrom share is accesed. For this to work /etc/fstab must contain # an entry like this: # # /dev/scd0 /cdrom iso9660 defaults,noauto,ro,user 0 0 # # The CD-ROM gets unmounted automatically after the connection to the # # If you don't want to use auto-mounting/unmounting make sure the CD # is mounted on /cdrom # ; preexec = /bin/mount /cdrom ; postexec = /bin/umount /cdrom smbusers: <nathan> = <"nathan"> Any help would be very much appreciated! Thanks!
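
    Since smbclient works but Nautilus does not, the difference is usually on the gvfs side rather than in Samba itself. A quick way to surface the error Nautilus swallows, using the gvfs-mount tool that ships with Fedora 16's gvfs package:

        gvfs-mount smb://192.168.0.8/nathan
        # any error printed here is the failure Nautilus hides; if prompted for a
        # domain, try WORKGROUP or leave it blank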


  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
    var express = require("express");
    var sql = require("node-sqlserver");

    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    var port = 80;

    var app = express();

    app.configure(function () {
        app.use(express.bodyParser());
    });

    // list all records
    app.get("/", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    // query by key and culture
    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                // note: building SQL by concatenating user input is vulnerable to SQL injection (see the note after this listing)
                var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    // query through a stored procedure
    app.get("/sproc/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    // insert a new record
    app.post("/new", function (req, res) {
        var key = req.body.key;
        var culture = req.body.culture;
        var val = req.body.val;

        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.send(200, "Inserted Successful");
                    }
                });
            }
        });
    });

    app.listen(port);

Publish it to Azure, and now we can see our Node.js application working with WASD through the x64 version of "node-sqlserver".
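One editorial note on the listing above: it builds SQL statements by concatenating user input, which is acceptable in a throwaway demo but open to SQL injection. A minimal sketch of a safer variant of the "/text" route, assuming "node-sqlserver" accepts "?" placeholders with a parameter array (check the module's documentation for the exact signature):

    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                // pass user input as parameters instead of concatenating it into the statement
                conn.queryRaw("SELECT * FROM [Resource] WHERE [Key] = ? AND [Culture] = ?",
                    [req.params.key, req.params.culture],
                    function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
            }
        });
    });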
Summary

In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it is possible to run some preparation jobs before the Node.js application starts. It also removes the IIS and IISNode limitations, so I personally recommend the worker role for Node.js hosting. But there are some problems with the approach I described here. The first is that we need to mark all JavaScript files and module files as "Copy always" or "Copy if newer" manually. The second is that, this way, we cannot retrieve the cloud service configuration information. For example, we defined the endpoint in the worker role properties, but we also hard-coded the listening port in Node.js. This should be changed so that Node.js can pick up the endpoint, but I can tell you it won't work with the approach shown here.
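As an editorial sketch of one possible direction (this is not the approach the author describes below, and the endpoint name "NodeEndpoint" is hypothetical): since the C# side of the worker role can read the cloud service configuration, it could look up its own endpoint through RoleEnvironment and hand the port to node.exe on the command line.

    // inside the worker role's Run() method, before starting node.exe
    // (requires a reference to Microsoft.WindowsAzure.ServiceRuntime)
    var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["NodeEndpoint"]; // hypothetical endpoint name
    var port = endpoint.IPEndpoint.Port;

    var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
    var js = "index.js";

    // run "node.exe index.js <port>" so the port is no longer hard-coded in JavaScript
    var info = new ProcessStartInfo(node, string.Format("{0} {1}", js, port))
    {
        UseShellExecute = false,
        WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
    };
    Process.Start(info).WaitForExit();

On the Node.js side, "index.js" would then read the port with something like parseInt(process.argv[2], 10) instead of a hard-coded value.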
In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • 'Could not find RubyGem bundler', even though it is…

    - by ashleyw
    I'm running Ruby 1.9.2preview1 (via RVM) and Bundler 0.9.18. If I execute bundle install, it works fine up to the point where I don't have the correct permissions to install a gem. However, if I execute sudo bundle install, it fails with 'Could not find RubyGem bundler'. How can this be, and why does the behaviour change when using sudo? Thanks

    Read the article

  • C# remote web request certificate error

    - by Ben
    Hi. I am currently integrating a payment gateway into an application. Following a successful transaction, the remote payment gateway posts details of the transaction back to my application (ITN). It posts to an HttpHandler that is used to read and validate the data. Part of the validation performed is a POST made by the handler to a validation service provided by the payment gateway; this effectively posts some of the original form values back to the payment gateway to ensure they are valid. The url that I am posting back to is "https://sandbox.payfast.co.za/eng/query/validate" and the code I am using:

        /// <summary>
        /// Posts the data back to the payment processor to validate the data received
        /// </summary>
        public static bool ValidateITNRequestData(NameValueCollection formVariables)
        {
            bool isValid = true;
            try
            {
                using (WebClient client = new WebClient())
                {
                    string validateUrl = (UseSandBox) ? SandboxValidateUrl : LiveValidateUrl;
                    byte[] responseArray = client.UploadValues(validateUrl, "POST", formVariables);

                    // get the response and replace the line breaks with spaces
                    string result = Encoding.ASCII.GetString(responseArray);
                    result = result.Replace("\r\n", " ").Replace("\r", "").Replace("\n", " ");

                    if (result == null || !result.StartsWith("VALID"))
                    {
                        isValid = false;
                        LogManager.InsertLog(LogTypeEnum.OrderError, "PayFast ITN validation failed", "The validation response was not valid.");
                    }
                }
            }
            catch (Exception ex)
            {
                LogManager.InsertLog(LogTypeEnum.Unknown, "Unable to validate ITN data. Unknown exception", ex);
                isValid = false;
            }
            return isValid;
        }

    However, on calling WebClient.UploadValues the following exception is raised:

        System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
           at System.Net.Security.SslState.StartSendAuthResetSignal(ProtocolToken message, AsyncProtocolRequest asyncRequest, Exception exception)

    For the sake of brevity I haven't listed the full call stack (I can do so if anyone thinks it will help). The remote certificate does appear to be valid. To get around the problem I did try adding a new RemoteCertificateValidationCallback that always returned true, but just ended up getting the following exception:

        System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
           at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet)
           at System.Security.CodeAccessPermission.Demand()
           at System.Net.ServicePointManager.set_ServerCertificateValidationCallback(RemoteCertificateValidationCallback value)
           at NopSolutions.NopCommerce.Payment.Methods.PayFast.PayFastPaymentProcessor.ValidateITNRequestData(NameValueCollection formVariables)

        The action that failed was: Demand
        The type of the first permission that failed was: System.Security.Permissions.SecurityPermission
        The Zone of the assembly that failed was: MyComputer

    So I am not sure this will work in medium trust? Any help would be much appreciated. Thanks, Ben

    Read the article

  • Is there a RAD for Android ?

    - by mawg
    Is there anything to help design the GUI, like a paint program (as in Delphi, VB, MSVC, QtBuilder, etc.)? And is there anything to help build packages, set permissions, and so on? What is out there to take the drudge work out of Android app creation and leave me free to concentrate on design and development?

    Read the article

  • requestRouteToHost android

    - by Sam
    Greetings everyone! I'm getting rather fed up with Android's ConnectivityManager class; I've been trying for 5 hours to get requestRouteToHost to work. I'm running my code on the emulator, but requestRouteToHost always fails. I know I have connectivity because I called getActiveNetworkInfo() and it was connected. I've added the ACCESS_NETWORK_STATE and CHANGE_NETWORK_STATE permissions to no avail. Any tips would be greatly appreciated. Sam

    Read the article

  • Facebook Graph API: user gender

    - by Mark
    The old Facebook API provided the user sex/gender as part of the default user data. Apparently the new Graph API does not provide that information, even though the documentation says that it does. I've heard people say that you need to request special permissions to get it and other pieces of data, but I have not been successful in getting it to work. Does anyone have an example, using the Facebook Graph API, of how to get the user's gender and/or location (city/state/country/whatever)?

    Read the article

< Previous Page | 622 623 624 625 626 627 628 629 630 631 632 633  | Next Page >