Search Results

Search found 30217 results on 1209 pages for 'database'.

  • Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)

    - by Dennis Williamson
    In a Python program I'm writing, I've compared two ways of counting entries in dictionaries which are in a list: a for loop with increment variables, versus a list comprehension over map(itemgetter) with len(). Each method takes the same time. Am I doing something wrong, or is there a better approach? Here is a greatly simplified and shortened data structure:

        list = [
            {'key1': True,  'dontcare': False, 'ignoreme': False, 'key2': True,  'filenotfound': 'biscuits and gravy'},
            {'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True,  'filenotfound': 'peaches and cream'},
            {'key1': True,  'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'},
            {'key1': False, 'dontcare': False, 'ignoreme': True,  'key2': False, 'filenotfound': 'over and under'},
            {'key1': True,  'dontcare': True,  'ignoreme': False, 'key2': True,  'filenotfound': 'Scotch and... well... neat, thanks'}
        ]

    Here is the for loop version:

        #!/usr/bin/env python
        # Python 2.6
        # count the entries where key1 is True;
        # keep a separate count for the subset that also have key2 True
        key1 = key2 = 0
        for dictionary in list:
            if dictionary["key1"]:
                key1 += 1
                if dictionary["key2"]:
                    key2 += 1
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)

    Output for the data above:

        Counts: key1: 3, subset key2: 2

    Here is the other, perhaps more Pythonic, version:

        #!/usr/bin/env python
        # Python 2.6
        # count the entries where key1 is True;
        # keep a separate count for the subset that also have key2 True
        from operator import itemgetter

        KEY1 = 0
        KEY2 = 1
        getentries = itemgetter("key1", "key2")
        entries = map(getentries, list)
        key1 = len([x for x in entries if x[KEY1]])
        key2 = len([x for x in entries if x[KEY1] and x[KEY2]])
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)

    Output for the data above (same as before):

        Counts: key1: 3, subset key2: 2

    I'm a tiny bit surprised these take the same amount of time, and I wonder if there's something faster; I'm sure I'm overlooking something simple. One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist, I'd have to profile the overhead of the data transfer, etc., and a database may not always be available. I have no control over the original form of the data. The code above is not going for style points.
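
    A single pass with sum() over a generator expression avoids building the intermediate lists entirely; this is a minimal sketch of that alternative (my addition, not from the original post), keeping the subset logic of the loop version:

        # Count in one pass without materializing intermediate lists.
        key1 = sum(1 for d in list if d["key1"])
        key2 = sum(1 for d in list if d["key1"] and d["key2"])
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)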

  • JPA 2.0 Provider Hibernate

    - by Rooh
    I have a very strange problem. We are using JPA 2.0 with Hibernate, annotation-based; the database is generated through JPA (DDL is true) and MySQL is the database. I will provide some reference classes and then describe my problem.

        @MappedSuperclass
        public abstract class Common implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id", updatable = false)
            private Long id;

            @ManyToOne
            @JoinColumn
            private Address address;

            // with all getters and setters,
            // as well as equals and hashCode
        }

        @Entity
        public class Parent extends Common {
            private String name;

            @OneToMany(cascade = {CascadeType.MERGE, CascadeType.PERSIST}, mappedBy = "parent")
            private List<Child> child;

            // setters and rest of class
        }

        @Entity
        public class Child extends Common {
            // some properties with getters/setters
        }

        @Entity
        public class Address implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id", updatable = false)
            private Long id;

            private String street;

            // rest of class with getters/setters
        }

    As you can see in the code, the Parent and Child classes both extend Common, so both have the address property and id. The problem occurs when I change the address reference in a Parent: the same change is reflected in all the Child objects in its list. And if I change the address reference in a Child, then on merge it changes the address reference of the Parent as well. I am not able to figure out whether this is a problem with JPA or with Hibernate.

  • Applying fine-grained security to an existing application

    - by Mark
    I've inherited a reasonably large and complex ASP.NET MVC3 web application using EF Code First on SQL Server. It uses ASP.NET Membership roles with database authentication. The controller actions are secured with attributes derived from AuthorizeAttribute that map roles to actions. There are extension methods for the finer points, such as showing a particular widget to particular roles. This works great, and I have a good understanding of the current security model.

    I've been asked to provide finer-grained security at the data level. For example, a 'Customer' user can only see data (throughout the database) associated with themselves and not other Customers. The problem is that 'Customer' is only one of 5 different types, each with their own specific restrictions (each of the 9 roles is one of these 5 types). The best thing I can think of is to go through all the data repositories and extend each and every LINQ statement/query with a filter for every user type. Even if I had time for that, it doesn't seem like the most elegant way. Any suggestions? I really don't know where to start with this, so anything could be helpful. Many thanks.

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (a SQLite DB) via JavaScript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html

    On first load, the app creates the database and tables and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, my app database is left in an inconsistent state. What I would prefer to do is deploy the SQLite DB as part of my iTunes App packaging, so that nothing has to be populated at app cold start. However, I'm not sure that is possible; all of the Google hits for this topic that I can find refer to the Core Data-provided SQLite, which is not what we're using...

    If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!
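
    If bundling a pre-built file does turn out to be possible, the database can be generated offline with any SQLite tool and shipped with the app. A minimal build-script sketch (my addition, in Python for brevity; the file and table names are made up), which also illustrates the single-transaction idea from the question, since SQLite rolls an interrupted transaction back rather than leaving the file half-populated:

        import sqlite3

        # Build the seed database offline, then bundle the file with the app.
        conn = sqlite3.connect("app_seed.db")  # hypothetical file name
        cur = conn.cursor()
        cur.execute("CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT)")
        # All inserts happen inside one transaction: if the build is
        # interrupted, the table stays consistent.
        with conn:
            cur.executemany("INSERT INTO products (name) VALUES (?)",
                            [("widget",), ("gadget",)])
        conn.close()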

  • What is the preferred method of building extremely lightweight business objects / DAL now that I have

    - by Seth Spearman
    Hello, I have completed a simple database for a project: only 6 tables. Of the 6, one is a "lookup" table. There is one "master" table that is the driver for the system; it is referenced as a foreign key by the other four tables. Given that this step is completed, what is the FASTEST, EASIEST way to create POCOs/BizObjects that can load the data and the child data? Here are my caveats:

    - I don't want to spend more than 30-60 minutes learning how.
    - There is very little biz logic needed in the POCOs. They will pretty much load data; I don't even really need to write data back.
    - I already know CSLA (up to version 3) but I feel that is overkill for this little project.
    - Nevertheless, I would love it if ROOT objects could have collection classes that contain the CHILD objects as in CSLA... but again, without using CSLA.
    - Please give the answer for .NET 3.5, but also say if I was restricted to only use .NET 2.0.
    - Ideally I could just point a tool at the database and the POCOs would be genn'ed.
    - FREE.

    Just curious what you guys use for this kind of scenario. I understand that this question is subjective, but I want to hear a variety of answers. Seth

  • No buffer space available (maximum connections reached?) from Postgres EDB driver

    - by Listening.Platform
    We are facing an exception while connecting to the database through our Java application. The stack trace is as follows:

        com.edb.util.PSQLException: The connection attempt failed.
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189)
            at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
            at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161)
            at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
            at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24)
            at com.edb.Driver.makeConnection(Driver.java:391)
            at com.edb.Driver.connect(Driver.java:266)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            ... 12 more
        Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(Unknown Source)
            at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at com.edb.core.PGStream.<init>(PGStream.java:70)
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115)
            ... 20 more

    When the error occurred we were not able to connect to the internet or the DB, and we had to reboot the system. The error occurred again after 3 days at the same code, i.e. while connecting to the DB. We checked TCP connections using netstat, but there were not many TCP connections, i.e. we had not reached the maximum limit.

    Our application has multiple long-running Java processes that pool the DB connections (not more than 60) and keep them alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application join large tables (10 million records) to get the related data. We are using the following system and applications:

    - Windows 2003 Server SP2
    - Java 1.6
    - Postgres Plus Advanced Server 8.4 database
    - edb-jdbc14.jar driver for connecting to the DB from Java

    We have used the default configuration of the Postgres DB, except for increasing the connections from 100 to 120. Has anybody encountered the same error with the Postgres EDB driver? Can anybody help us find the solution?

  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone, I have a stored procedure that does, among other stuff, some inserts into different tables inside a loop. See the example below for a clearer understanding:

        INSERT INTO T1 VALUES ('something')
        SET @MyID = Scope_Identity()
        ... some stuff goes here
        INSERT INTO T2 VALUES (@MyID, 'something else')
        ... The rest of the procedure

    These two tables (T1 and T2) each have an IDENTITY(1, 1) column; let's call them ID1 and ID2. However, after running the procedure in our production database (a very busy database) and having more than 6250 records in each table, I have noticed one incident where ID1 does not match ID2, although normally for each record inserted in T1 there is a record inserted in T2, and the identity column in both is incremented consistently. The "wrong" records looked something like this:

        ID1  Col1
        ---- ---------
        4709 data-4709
        4710 data-4710

        ID2  ID1  Col1
        ---- ---- ---------
        4709 4710 data-4709
        4710 4709 data-4710

    Note the "inverted" ID1 in the second table. Not knowing that much about SQL Server's underlying operations, I have put together the following "theory"; maybe someone can correct me on this. What I think is that because the loop is faster than physically writing to the table, and/or maybe some other thing delayed the writing process, the records were buffered, and when the time came to write them, they were written in no particular order.

    Is that even possible? If no, how do you explain the above-mentioned scenario? If yes, then I have another question to raise: what if the first insert (from the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer to this is also yes, what can I do to ensure the insertion into the two tables happens in sequence with the correct IDENTITY? I appreciate any comment and information that helps me understand this. Thanks in advance.

  • Classic ASP application-wide initializations and object caching

    - by slack3r
    In classic ASP (which I am forced to use), I have a few factory functions, that is, functions that return classes. I use JScript. In one include file I use these factory functions to create some classes that are used throughout the application. This include file is included with the #include directive in all pages. These factory functions do some "heavy lifting" and I don't want them to be executed on every page load. So, to make this clear, I have something like this:

        // factory.inc
        function make_class(arg1, arg2) {
            function klass() {
                //...
            }
            // ... Some heavy stuff
            return klass;
        }

        // init.inc, included everywhere
        <!-- #include FILE="factory.inc" -->
        // ...
        MyClass1 = make_class(myarg01, myarg02);
        MyClass2 = make_class(myarg11, myarg12);
        //...

    How can I achieve the same effect without calling make_class on every page load? I know that:

    - I can't cache the classes in the Application object
    - I can't use the Application_OnStart hook in Global.asa
    - I could probably create a scripting component, but I really don't want to do that

    So, is there something else I can do? Maybe some way to achieve caching of these classes, which are really objects in JScript?

    PS: [further clarification] In the above code "heavy stuff" is not so heavy, but I just want to know if there's a way to avoid it being executed all the time. It reads database meta information, builds a table of the primary keys in the database and another table that resolves strings to classes, etc.

  • Using JavaScript and PHP together

    - by EmmyS
    I have a PHP form that needs some very simple validation on submit. I'd rather do this validation client-side, as there's quite a bit of server-side validation that happens to deal with writing form values to a database. So I just want to call a JavaScript function onsubmit to compare the values in two password fields. This is what I've got:

        function validate(form) {
            var password = form.password.value;
            var password2 = form.password2.value;
            alert("password:" + password + " password2:" + password2);
            if (password != password2) {
                alert("not equal");
                document.getElementByID("passwordError").style.display = "inline";
                return false;
            }
            alert("equal");
            return true;
        }

    The idea is that a default-hidden div containing an error message would be displayed if the two passwords don't match. The alerts are just there to display the values of password and password2, and then again to indicate whether they match or not (they will not be used in production code). I'm using an input type=submit button and calling the function in the form tag:

        <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post" onsubmit="return validate(this);">

    Everything alerts as expected when entering non-matching values. I would have hoped (and assumed, based on past use) that if the function returned false, the actual submit would not occur. And yet, it does. I'm testing by entering non-matching values in the password fields, and the alerts clearly show me the values and the "not equal" result, but the actual form action still occurs and it tries to write to my database. I'm pretty new at PHP; is there something about it that will not let me combine it with JavaScript this way? Would it be better to use an input type=button and include submit() in the function itself if it returns true?

  • How can I retrieve files from a folder on the hard disk, and how do I display uploaded file data in a textarea

    - by Deepak Narwal
    I have made an application form in which I ask for a username, password, email ID and the user's resume. After uploading, I store the resume on the hard disk in htdocs/uploadedfiles/ in a format something like username_filename. In the database I store the file name, file size and file type. Some of the code for this is shown here:

        $filesize = $_FILES['file']['size'];
        $filename = $_FILES['file']['name'];
        $filetype = $_FILES['file']['type'];
        $temp_name = $_FILES['file']['tmp_name']; // temporary name of uploaded file
        $pwd_hash = hash('sha1', $_POST['password']);
        $target_path = "uploadedfiles/";
        $target_path = $target_path . $_POST['username'] . "_" . basename($_FILES['file']['name']);
        move_uploaded_file($_FILES['file']['tmp_name'], $target_path);
        $sql = "insert into employee values ('NULL','{$_POST[username]}','{$pwd_hash}','{$filename}','{$filetype}','$filesize',NOW())";

    Now I have two questions:

    1. How can I display this file's data in a textarea (something like naukri.com's resume section)?
    2. How can one retrieve the resume file from the folder on the hard disk? What query should I write to fetch the file from that folder? I know how to retrieve data from the database, but I don't know how to retrieve a file from a folder on the hard disk, for example if the user wants to delete the file or download it. How can I do this?

  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use.

    I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;.

    The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time, and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of:

    - Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
    - Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)

    How would you deal with a complex reporting problem like this?

  • Extract data from PostgreSQL DB without using pg_dump

    - by John Horton
    There is a PostgreSQL database on which I only have limited access (e.g., I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as CSV:

        for table_name, file_name in zip(table_names, file_names):
            cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz""" % (table_name, dir_name, file_name)
            os.system(cmd)

    I would like to not use CSV if possible, as I lose the field types and the encoding can get messed up. Best would probably be some way of getting the SQL that generates the table, to use together with \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries: one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated - thanks!
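
    One way to keep the type information next to the data (a sketch of mine, not from the post; it assumes the psycopg2 package and the same read-only SELECT access, and reuses the table_names/file_names/dir_name variables from the script above) is to pull column names and types from information_schema and write them alongside a CSV produced by a server-side COPY:

        import gzip
        import psycopg2

        conn = psycopg2.connect("dbname=remote_db")  # connection details assumed
        cur = conn.cursor()

        for table_name, file_name in zip(table_names, file_names):
            # Save column names and types so the CSV can be reloaded
            # locally with a faithful CREATE TABLE.
            cur.execute("""SELECT column_name, data_type
                           FROM information_schema.columns
                           WHERE table_name = %s""", (table_name,))
            with open("%s/%s.types" % (dir_name, file_name), "w") as f:
                for column_name, data_type in cur.fetchall():
                    f.write("%s\t%s\n" % (column_name, data_type))

            # COPY runs server-side and respects the database encoding.
            with gzip.open("%s/%s.gz" % (dir_name, file_name), "wb") as f:
                cur.copy_expert("COPY (SELECT * FROM %s) TO STDOUT WITH CSV HEADER" % table_name, f)

        conn.close()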

  • From a Sinatra::Base object, get the port of the application that includes the base object

    - by Poul
    I have a Sinatra::Base object that I would like to include in all of my web apps. In that base class I have the configure method, which is called on start-up. I would like that configure code to 'register' the service with a centralized database. The information that needs to be sent when registering is the information on how to contact this web service: things like host and port. I then plan on having a monitoring service that will spin over all registered services and occasionally ping them to make sure they are still up and running.

    In the configure method I am having trouble getting the port information: the 'self.settings.port' variable doesn't seem to work in this method.

    a) Any ideas on how to get the port? I have the host.
    b) Is there a Sinatra plug-in that already does something like this, so I don't have to write it myself? :-)

        # in my Sinatra::Base code. let's call it register_me.rb
        class RegisterMe < Sinatra::Base
          configure do
            # save host and port information to database
          end

          get '/check_status' do
            # return status
          end
        end

        # in my web service code
        require 'register_me'
        # at this point, sinatra will initialize the RegisterMe object and call configure

        post '/blah' do
          # example of a method for this particular web service
        end

  • NHibernate correct way to reattach cached entity to different session

    - by Chris Marisic
    I'm using NHibernate to query a list of objects from my database. After I get the list of objects, I iterate over them and apply a distance approximation algorithm to find the nearest object. I consider getting the list of objects and applying the algorithm over them to be a heavy operation, so I cache the object which the algorithm finds in HttpRuntime.Cache. From that point on, whenever I'm given the same input again, I can pull the object directly from the cache instead of having to hit the database and traverse the list.

    My object is a complex object that has collections attached to it. In the query where I return the full list of objects, I don't bring back any of the sub-collections eagerly, so when I read my cached object I need lazy loading to work correctly to be able to display the object fully.

    Originally I tried using this to re-associate my cached object with a new session:

        _session.Lock(obj, LockMode.None);

    However, when accessing the page concurrently from another instance I get the error:

        Illegal attempt to associate a collection with two open sessions

    I then tried something different:

        _session.Merge(obj);

    However, watching the output of this in NHProf shows that it is deleting and re-associating my object's contained collections with my object, which is not what I want, although it seems to work fine. What is the correct way to do this? Neither of these seems right.

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

        CREATE TABLE Contact (
            ID varchar(15),
            NAME varchar(30)
        );

        CREATE TABLE Address (
            ID varchar(15),
            CONTACT_ID varchar(15),
            NAME varchar(50)
        );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since everything is kept in the Identity Map in memory.

    Question: What is a good approach to address this (avoiding unnecessary db round trips)? One idea is to retrieve the last ID with a call to the database, like:

        SELECT Auto_increment
        FROM information_schema.tables
        WHERE table_name='Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs), all in the current transaction context.
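
    A common alternative to querying information_schema (which can race with concurrent inserts) is to read the generated key back from the driver right after the parent INSERT, then fill in the children inside the same transaction. A minimal sketch of that pattern (my addition; Python's stdlib sqlite3 stands in here for whatever driver the persistence layer wraps, and the per-database equivalents are LAST_INSERT_ID() in MySQL and RETURNING ... INTO with sequences in Oracle):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.execute("CREATE TABLE Contact (ID INTEGER PRIMARY KEY AUTOINCREMENT, NAME TEXT)")
        cur.execute("CREATE TABLE Address (ID INTEGER PRIMARY KEY AUTOINCREMENT,"
                    " CONTACT_ID INTEGER, NAME TEXT)")

        # Commit phase of the Unit of Work: insert the parent first, read the
        # generated key from the driver, then insert the children with it.
        # Everything runs inside one transaction, committed at the end.
        cur.execute("INSERT INTO Contact (NAME) VALUES (?)", ("Alice",))
        contact_id = cur.lastrowid
        cur.execute("INSERT INTO Address (CONTACT_ID, NAME) VALUES (?, ?)",
                    (contact_id, "Home"))
        conn.commit()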

  • BASH echo write mysql input

    - by jmituzas
    I have a bash menu where variables are written to a file for mysql input. Here's what I have:

        echo "CREATE DATABASE '$mysqldbn';
        #GRANT ALL PRIVILEGES ON *.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup' WITH GRANT OPTION;
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup';
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myip' IDENTIFIED BY '$mysqlup';
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'localhost' IDENTIFIED BY '$mysqlup';
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$rip' IDENTIFIED BY '$mysqlup';" > nmysql.db
        mysql -u root -p$mypass < nmysql.db

    The problem is that to get the variables to show, I had to put them in single quotes, and the single quotes show up as I want for instances like '$mysqlu'@'localhost'. But how can I remove the quotes and still get to use the variable in an instance like CREATE DATABASE '$mysqldbn'? Double quotes won't work either; I am at a loss. Thanks in advance, Joe

  • Barcode to Product Name Converter

    - by spagetticode
    Hi all, we are designing a mobile shopping system. The camera on the phone will read a barcode, and then we have to convert the barcode to a standard product name in order to save it to our database. We save it to our database because we connect to the web services of local e-commerce sites to get their prices for the related product: we send the product name, get the prices back, and the user can then see the prices, compare and buy. We cannot send the barcode number to get the data from the e-commerce sites, because some sites do not have the barcode number info. So I have to somehow get the product name knowing only the barcode.

    Google returns results when a barcode number is searched, but how am I going to parse that data? How am I going to know which answer of a Google search best suits my input? Is there a site that sells matched barcode and product name data? We are designing the system with C#. Thanks a lot.
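
    One public data source worth evaluating is the Open Food Facts API, which resolves EAN/UPC barcodes to product names for food products; whether its coverage fits this catalogue is an assumption on my part. A quick lookup sketch (in Python for brevity; the C# version would be a straightforward HTTP GET plus JSON parse):

        import json
        from urllib.request import urlopen

        def product_name(barcode):
            # Open Food Facts: status == 1 means the barcode was found.
            url = "https://world.openfoodfacts.org/api/v0/product/%s.json" % barcode
            data = json.load(urlopen(url))
            if data.get("status") == 1:
                return data["product"].get("product_name")
            return None  # unknown barcode; fall back to another source

        print(product_name("737628064502"))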

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON web service API. I have classes of resource objects (some are nested), and I pass results from an IntentService that accesses the web service, using the Parcelable interface for all the resource classes. The web service returns arrays of results that can be potentially large because of the nesting; for example, a post object also contains a comments array, and each comment also contains a user object.

    Currently I either insert the results into a SQLite database or display them in a ListView (my relevant methods accept ArrayList<resourceClass> as arguments). Some data needs to be stored persistently and some should not be.

    Since I don't know what size of lists I can handle this way without reaching the memory limits: is this a good practice? Is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert into the database or display the data from that file? Is there a better way to handle this?

    By the way, I'm parsing the JSON as a stream with Gson's Reader, so there shouldn't be memory issues at that stage.

  • PHP and MySQL: listing databases and looping through results

    - by Jacksta
    Beginner help needed :) I am doing an example from a PHP book which lists the tables in databases. I am getting an error on line 36: $db_list .= "$table_list";

        <?php
        //connect to database
        $connection = mysql_connect("localhost", "admin_cantsayno", "cantsayno")
            or die(mysql_error());

        //list databases
        $dbs = @mysql_list_dbs($connection) or die(mysql_error());

        //start first bullet list
        $db_list = "<ul>";
        $db_num = 0;

        //loop through results of functions
        while ($db_num < mysql_num_rows($dbs)) {
            //get database names and make each a list point
            $db_names[$db_num] = mysql_tablename($dbs, $db_num);
            $db_list .= "<li>$db_names[$db_num]";

            //get table names and make another list
            $tables = @mysql_list_tables($db_names[$db_num]) or die(mysql_error());
            $table_list = "<ul>";
            $table_num = 0;

            //loop through results of function
            while ($table_num < mysql_num_rows($tables)) {
                //get table names and make each bullet point
                $table_names[$table_num] = mysql_tablename($tables, $table_num);
                $table_list .= "<li>$table_names[$table_num]";
                $table_num++;
            }

            //close inner bullet list and increment number to continue
            $table_list .= "</ul>"
            $db_list .= "$table_list";
            $db_num++;
        }

        //close outer bullet list
        $db_list .= "</ul>";
        ?>
        <html>
        <head>
        <title>MySQL Tables</title>
        </head>
        <body>
        <p><strong>Data bases and tables on local host</strong></p>
        <? echo "$db_list"; ?>
        </body>

  • Web service performing asynchronous call

    - by kornelijepetak
    I have a web service method FetchNumber() that fetches a number from a database and then returns it to the caller. But just before it returns the number to the caller, it needs to send the number to another service, so it instantiates and runs a BackgroundWorker whose job is to send the number to that other service.

        public class FetchingNumberService : System.Web.Services.WebService
        {
            [WebMethod]
            public int FetchNumber()
            {
                int value = Database.GetNumber();
                AsyncUpdateNumber async = new AsyncUpdateNumber(value);
                return value;
            }
        }

        public class AsyncUpdateNumber
        {
            public AsyncUpdateNumber(int number)
            {
                sendingNumber = number;
                worker = new BackgroundWorker();
                worker.DoWork += asynchronousCall;
                worker.RunWorkerAsync();
            }

            private void asynchronousCall(object sender, DoWorkEventArgs e)
            {
                // Sending the number to a service (which is synchronous) here
            }

            private int sendingNumber;
            private BackgroundWorker worker;
        }

    I don't want to block the web service (FetchNumber()) while sending the number to the other service, because that can take a long time and the caller does not care about it; the caller expects the method to return as soon as possible. FetchNumber() creates the BackgroundWorker, runs it and then finishes (while the worker is running in the background thread). I don't need any progress report or return value from the background worker; it's more of a fire-and-forget concept.

    My question is this: since the web service object is instantiated per method call, what happens when the called method (FetchNumber() in this case) is finished while the background worker it instantiated and ran is still running? What happens to the background thread? When does the GC collect the service object? Does this prevent the background thread from executing correctly to the end? Are there any other side effects on the background thread? Thanks for any input.

  • How can I return the number of rows affected in sqlplus to a shell script?

    - by jessica
    Here is my shell script:

        # Deletes data from the 'sample' table starting August 30, 2011.
        # This is done in stages with a 7 second break every
        # 2 seconds or so to free up the database for other users.
        # The message "Done." will be printed when there are
        # no database entries left to delete.

        user="*****"
        pass="*****"

        while(true); do
            starttime=`date +%s`
            while [[ $((`date +%s` - $starttime)) -lt 2 ]]; do
                sqlplus $user/$pass@//blabla <<EOF
        whenever sqlerror exit 1
        delete from sample
        where sampletime >= to_date('08-30-2011','mm-dd-yyyy')
        and rownum <= 2;
        commit;
        EOF
                rows = ???
                if [ $rows -eq 0 ] ; then
                    echo "Done."
                    exit 0;
                fi
            done
            sleep 7
        done

    If there is no way to get the number of rows, maybe I can use an error code returned by sqlplus to figure out when to end the script? Any thoughts? Thank you!
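
    By default sqlplus echoes DML feedback such as "2 rows deleted." after each statement. One way to capture that count (a sketch of mine, not from the post; it assumes sqlplus is on the PATH and that feedback has not been turned off) is to drive sqlplus from a small wrapper and parse its output:

        import re
        import subprocess

        SQL = """whenever sqlerror exit 1
        delete from sample
        where sampletime >= to_date('08-30-2011','mm-dd-yyyy')
        and rownum <= 2;
        commit;
        exit;
        """

        def delete_batch(user, password, tns="//blabla"):
            # -s suppresses the banner; feedback lines like "2 rows deleted." remain.
            proc = subprocess.run(
                ["sqlplus", "-s", "%s/%s@%s" % (user, password, tns)],
                input=SQL, capture_output=True, text=True)
            match = re.search(r"(\d+) rows? deleted", proc.stdout)
            return int(match.group(1)) if match else 0

        print(delete_batch("user", "pass"))  # 0 means nothing left to delete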

  • Why does a change of Session State provider lead to an ASPx page yielding garbage?

    - by Rory Becker
    I have an ASP.NET web app which has worked very well up until now. I was recently asked to explore ways of making it scale better, and found that separation of the database and the web app would help. Further, I was told that if I changed my session state provider to SQL Server, I would be able to duplicate the web stack onto several machines, each of which could call back to the state server, allowing the load to be distributed better. This sounds logical.

    So I created an ASPState database using ASPNet_RegSQL.exe, as detailed in many locations across the web, and changed the web.config of my app from:

        <sessionState mode="InProc" cookieless="false" timeout="20" />

    to:

        <sessionState mode="SQLServer"
                      sqlConnectionString="Server=SomeSQLServer;user=SomeUser;password=SomePassword"
                      cookieless="false" timeout="20" />

    Then I addressed my app, which presented me with its logon screen, and I duly logged in. Once in, I was presented not with the page I was expecting, but with a page of unreadable garbage (screenshot not reproduced here).

    I can change the session state back and forth: this problem goes away and then comes back based on which set of configuration I use. Why is this happening?

  • How should I pass the translated text to my object in my multilingual application?

    - by boatingcow
    Up until now, I have maintained a 'dictionary' table in my database, for example:

        +-----------+---------------------------------------+-----------------------------------------------+--------+
        | phrase    | en                                    | fr                                            | etc... |
        +-----------+---------------------------------------+-----------------------------------------------+--------+
        | generated | Generated in %1$01.2f seconds at %2$s | Créée en %1$01.2f secondes à %2$s aujourd'hui | ...    |
        | submit    | Submit...                             | Envoyer...                                    | ...    |
        +-----------+---------------------------------------+-----------------------------------------------+--------+

    I'll then select all rows from the database for the column that matches the locale we're interested in (or read the cache from a file to speed up the db lookup) and dump the dictionary into an array called $lng. Then I'll have HTML helper objects like this in my view:

        $html->input(array('type' => 'submit', 'value' => $lng['submit'], etc...));
        ...
        $html->div(array('value' => sprintf($lng['generated'], $generated, date('H:i')), etc...));

    The translations can appear in PDF, XLS and AJAX responses too. The problem with my approach so far is that I now have loads of global $lng; declarations in every class where there is a function that spits out UI code. How do other people get the translation into the object? Is this one scenario where globals aren't actually that bad? Would it be madness to create a class with accessors when the dictionary terms are all static?
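
    The usual alternative to globals is to wrap the dictionary in a single translator object and hand it to whatever renders UI (constructor injection). A sketch of the pattern (my addition, in Python for brevity; the class and method names are invented, but the idea maps directly onto PHP classes):

        class Translator:
            """Wraps one locale's phrase dictionary; no globals needed."""
            def __init__(self, phrases):
                self.phrases = phrases  # e.g. loaded once from the DB for 'en'

            def __call__(self, key, *args):
                text = self.phrases.get(key, key)  # fall back to the key itself
                return text % args if args else text

        # Built once at startup, then passed to every view helper that needs it.
        lng = Translator({"submit": "Submit...",
                          "generated": "Generated in %01.2f seconds at %s"})

        class HtmlHelper:
            def __init__(self, translator):
                self.t = translator  # injected, not global

            def submit_button(self):
                return '<input type="submit" value="%s">' % self.t("submit")

        print(HtmlHelper(lng).submit_button())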

  • How can I share an entity framework model across website users

    - by richardmoss
    Hello, currently my website is based around MVC and the Entity Framework, running against a SQL Server 2005 database. So far it has all been running very smoothly, and I really enjoy MVC and its slimmer, more concise code (and no huge viewstates or soul-destroying postbacks ;)).

    Recently I was working on upgrading the site to use a simple forum system, and this is where I started running into problems. When I was testing the site using two different browsers, if I created or replied to a post in one browser, the other browser couldn't see the post. At the moment, each visitor to the site gets their own copy of the entity model, which I store in their session data. Obviously this is the problem, as updates to one model aren't getting carried to the other.

    As a test, I tried storing a single copy of the model which all visitors would access, by assigning the model to a static variable. This worked, and both browsers could see each other's modifications. However, it had its side effects. For example, if I fired up both browsers at the same time while the model was being initialized, one browser would crash and the other would work fine, despite me using a locking object, so in theory one of them should have been delayed until the model was ready (of course, I could have implemented this wrong ;)). Also, this site originally did use one model for all visitors, and when it was live it frequently shut down, killing the IIS application pool while it did. Now I'm not sure if this was related, but I don't really want to reintroduce whatever bug I had that caused this shutdown.

    So, my question is a simple one really: what is the best way of either using the same model for all website users so they all see updates, or, if they do have separate copies (which I imagine will have a performance impact in time), how can the models detect changes in the database and update themselves accordingly?

    Thanks in advance for any advice!

    Regards,
    Richard Moss

  • MySQL SELECT combining 3 SELECTs INTO 1

    - by Martin Tóth
    Consider the following tables in a MySQL database:

        entries:
            creator_id INT
            entry TEXT
            is_expired BOOL

        other:
            creator_id INT
            entry TEXT

        userdata:
            creator_id INT
            name VARCHAR
            etc...

    In entries and other, there can be multiple entries by one creator. The userdata table is read-only for me (placed in another database). I'd like to achieve the following SELECT result:

        +------------+---------+---------+-------+
        | creator_id | entries | expired | other |
        +------------+---------+---------+-------+
        |      10951 |      59 |      55 |    39 |
        |      70887 |      41 |      34 |   108 |
        |      88309 |      38 |      20 |   102 |
        |      94732 |       0 |       0 |    86 |
        ...

    where entries is equal to SELECT COUNT(entry) FROM entries GROUP BY creator_id, expired is equal to SELECT COUNT(entry) FROM entries WHERE is_expired = 0 GROUP BY creator_id, and other is equal to SELECT COUNT(entry) FROM other GROUP BY creator_id.

    I need this structure because after doing this SELECT, I need to look up user data in the userdata table, which I planned to do with an INNER JOIN, selecting the desired columns. I solved this problem by selecting NULL into each column which does not apply for a given SELECT:

        SELECT creator_id,
               COUNT(any_entry) AS entries,
               COUNT(expired_entry) AS expired,
               COUNT(other_entry) AS other
        FROM (
            SELECT creator_id, entry AS any_entry, NULL AS expired_entry, NULL AS other_entry
            FROM entries
            UNION
            SELECT creator_id, NULL AS any_entry, entry AS expired_entry, NULL AS other_entry
            FROM entries WHERE is_expired = 1
            UNION
            SELECT creator_id, NULL AS any_entry, NULL AS expired_entry, entry AS other_entry
            FROM other
        ) AS tTemp
        GROUP BY creator_id
        ORDER BY entries DESC, expired DESC, other DESC;

    I've left out the INNER JOIN and selecting other columns from the userdata table on purpose (my question being about combining 3 SELECTs into 1).

    - Is my idea valid? Am I trying to use the right "construction" for this?
    - Are these kinds of SELECTs possible without creating an "empty" column (some kind of JOIN)?
    - Should I do it "outside the DB": make 3 SELECTs, impose some order on them (say, with Python lists/dicts) and then do the additional SELECTs for userdata?

    The solution for a similar question does not return rows where entries and expired are 0. Thank you for your time.
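
    Since the poster mentions Python, here is a runnable sketch (my addition) of the usual alternative: conditional aggregation folds the entries/expired counts into a single scan of the entries table, and the SUM(is_expired = 1) trick works the same way in MySQL, where a boolean comparison evaluates to 0/1. The other table would still be combined with a join or the existing UNION:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()
        cur.executescript("""
            CREATE TABLE entries (creator_id INT, entry TEXT, is_expired BOOL);
            INSERT INTO entries VALUES (10951, 'a', 1), (10951, 'b', 0), (70887, 'c', 1);
        """)
        # One scan of `entries` produces both counts, replacing two of the
        # three UNION branches from the query above.
        for row in cur.execute("""
            SELECT creator_id,
                   COUNT(entry)        AS entries,
                   SUM(is_expired = 1) AS expired
            FROM entries
            GROUP BY creator_id
        """):
            print(row)  # (10951, 2, 1) and (70887, 1, 1)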
