Search Results

Search found 34962 results on 1399 pages for 'drop database'.


  • Can I do transactions and locks in CouchDB?

    - by damian
    I need to do transactions (begin, commit or rollback) and locks (select for update). How can I do that in a document-model DB? Edit: The case is this: I want to run an auction site, and I'm also thinking about how to handle direct purchases. In a direct purchase I have to decrement the quantity field in the item record, but only if the quantity is greater than zero. That is why I need locks and transactions. I don't know how to address that without locks and/or transactions. Can I solve this with CouchDB?


  • mysql db connection

    - by Dragster
    Hi there, I have been searching the web for a way to connect my Android emulator to a MySQL DB. I've found that you can't connect directly, but have to go through a web server that handles the requests coming from Android. I found the following code on www.helloandroid.com, but I don't understand it. If I run this code on the emulator nothing happens and the screen stays black. Where does the Log.i output land: on the Android screen, in the error log, or somewhere else? Can somebody help me with this code?

        package app.android.ticket;

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.util.ArrayList;

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.entity.UrlEncodedFormEntity;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.message.BasicNameValuePair;
        import org.json.JSONArray;
        import org.json.JSONException;
        import org.json.JSONObject;

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class fetchData extends Activity {

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                // call the method to run the data retrieval
                getServerData();
            }

            public static final String KEY_121 = "http://www.jorisdek.nl/android/getAllPeopleBornAfter.php";

            public fetchData() {
                Log.e("fetchData", "Initialized ServerLink ");
            }

            private void getServerData() {
                InputStream is = null;
                String result = "";

                // the year data to send
                ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
                nameValuePairs.add(new BasicNameValuePair("year", "1980"));

                // http post
                try {
                    HttpClient httpclient = new DefaultHttpClient();
                    HttpPost httppost = new HttpPost(KEY_121);
                    httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                    HttpResponse response = httpclient.execute(httppost);
                    HttpEntity entity = response.getEntity();
                    is = entity.getContent();
                } catch (Exception e) {
                    Log.e("log_tag", "Error in http connection " + e.toString());
                }

                // convert response to string
                try {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
                    StringBuilder sb = new StringBuilder();
                    String line = null;
                    while ((line = reader.readLine()) != null) {
                        sb.append(line + "\n");
                    }
                    is.close();
                    result = sb.toString();
                } catch (Exception e) {
                    Log.e("log_tag", "Error converting result " + e.toString());
                }

                // parse json data
                try {
                    JSONArray jArray = new JSONArray(result);
                    for (int i = 0; i < jArray.length(); i++) {
                        JSONObject json_data = jArray.getJSONObject(i);
                        Log.i("log_tag", "id: " + json_data.getInt("id")
                                + ", name: " + json_data.getString("name")
                                + ", sex: " + json_data.getInt("sex")
                                + ", birthyear: " + json_data.getInt("birthyear"));
                    }
                } catch (JSONException e) {
                    Log.e("log_tag", "Error parsing data " + e.toString());
                }
            }
        }


  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so, and I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you. Brute-force query: Index-based exception query: My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.


  • SQL Server: concatenate a string column value to 5 characters long

    - by mrp
    Scenario: I have a table1 (col1 char(5)). A value in table1 may be '001', '01' or '1'. Requirement: whatever the value in col1, I need to retrieve it at 5 characters long, padded with leading '0's to make it 5 characters. Technique I applied: select right(('00000' + col1),5) from table1; I didn't see any reason why it wouldn't work, but it didn't. Can anyone help me achieve the desired result?
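
    A likely explanation (an assumption, since the question does not show the actual output) is that char(5) pads stored values with trailing spaces, so '01' is stored as '01   ' and RIGHT('00000' + col1, 5) simply returns that same padded value. A minimal sketch of one common fix is to trim before padding:

        -- Hypothetical sample data; table and column names follow the question.
        -- CREATE TABLE table1 (col1 char(5));
        -- INSERT INTO table1 (col1) VALUES ('001'), ('01'), ('1');

        -- Trim the blanks that char(5) appends, then left-pad to 5 characters.
        SELECT RIGHT('00000' + RTRIM(col1), 5) AS padded_col1
        FROM table1;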


  • How Can I Create This Complicated Query?

    - by mTuran
    Hi, I have 3 tables: projects, skills and project_skills. The projects table holds a project's general data; the skills table holds a skill id and skill name; and the project_skills table holds the project-skill relationships. Here is the schema of the tables:

        CREATE TABLE IF NOT EXISTS `project_skills` (
          `project_id` int(11) NOT NULL,
          `skill_id` int(11) NOT NULL,
          KEY `project_id` (`project_id`),
          KEY `skill_id` (`skill_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci;

        CREATE TABLE IF NOT EXISTS `projects` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `employer_id` int(11) NOT NULL,
          `project_title` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `project_description` text COLLATE utf8_turkish_ci NOT NULL,
          `project_budget` int(11) NOT NULL,
          `project_allowedtime` int(11) NOT NULL,
          `project_deadline` datetime NOT NULL,
          `total_bids` int(11) NOT NULL,
          `average_bid` int(11) NOT NULL,
          `created` datetime NOT NULL,
          `active` tinyint(1) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `created` (`created`),
          KEY `employer_id` (`employer_id`),
          KEY `active` (`active`),
          FULLTEXT KEY `project_title` (`project_title`,`project_description`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=3;

        CREATE TABLE IF NOT EXISTS `skills` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `category` int(11) NOT NULL,
          `name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `seo_name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `total_projects` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `seo_name` (`seo_name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=224;

    I want to select projects with their related skill names. I think I have to use a JOIN, but I don't know how. Thanks
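
    A minimal sketch of one way to pull each project together with its skill names (assuming MySQL, as the MyISAM DDL suggests); GROUP_CONCAT collapses the matching skill names into a single column per project:

        SELECT p.id,
               p.project_title,
               GROUP_CONCAT(s.name ORDER BY s.name SEPARATOR ', ') AS skill_names
        FROM projects p
        LEFT JOIN project_skills ps ON ps.project_id = p.id
        LEFT JOIN skills s         ON s.id = ps.skill_id
        GROUP BY p.id, p.project_title;

    Dropping the GROUP_CONCAT and the GROUP BY instead returns one row per project-skill pair, which can be easier to post-process in application code.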


  • Querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file of a few gigabytes in the format: {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...} with one entry per line. First, I would like to easily find entries for a given IP. This part is easy because I can use grep, for example, but even here I would like a better solution because I want the response as fast as possible. The next part is more complicated: I would like to find entries for a selected IP, of a selected type, and with a particular value of some_key in action_data. I would probably have to convert this file to an SQL DB (probably SQLite, because it will be a desktop app), but I'd like to ask whether better solutions exist.
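
    If the SQLite route is taken, a minimal sketch of a schema that would support the lookups described above (table and column names are illustrative assumptions); the composite index covers both the by-IP and the by-IP-plus-type queries:

        -- One row per log line; action_data keeps the raw JSON of that object.
        CREATE TABLE actions (
            user_ip     TEXT NOT NULL,
            action_type TEXT NOT NULL,
            action_data TEXT NOT NULL
        );

        CREATE INDEX idx_actions_ip_type ON actions (user_ip, action_type);

        -- Narrow the candidates by the indexed columns first; the some_key
        -- filter can then be applied to the much smaller result set.
        SELECT action_data
        FROM actions
        WHERE user_ip = '10.0.0.1' AND action_type = 'login';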


  • Optimizing an embedded SELECT query in mySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and it is just about to get bigger and bigger (so I have to think about future performance here as well):

        SELECT count(payment_id) AS signup_count, sum(amount) AS signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND completed > 0
          AND tm_completed IS NOT NULL
          AND member_id NOT IN (
              SELECT p2.member_id
              FROM payments p2
              WHERE p2.completed = 1
                AND p2.tm_completed < '2009-05-01'
                AND p2.tm_completed IS NOT NULL
              GROUP BY p2.member_id
          )

    And as you might or might not imagine, it chokes the MySQL server to a standstill. What it does is simply pull the number of new users who signed up: members with at least one "completed" payment, whose tm_completed is not empty (it is only populated for completed payments), and (the embedded SELECT) who have never had a "completed" payment before - meaning they are new members (the system does rebills and whatnot, and this is the only way to differentiate between an existing member who just got rebilled and a new member who got billed for the first time). Now, is there any possible way to optimize this query to use fewer resources, and to stop it bringing my MySQL server to its knees? Am I missing any info needed to clarify this further? Let me know...

    EDIT: Here are the indexes already on that table:

        PRIMARY       PRIMARY  46757  payment_id
        member_id     INDEX    23378  member_id
        payer_id      INDEX    11689  payer_id
        coupon_id     INDEX        1  coupon_id
        tm_added      INDEX    46757  tm_added, product_id
        tm_completed  INDEX    46757  tm_completed, product_id
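
    A hedged sketch of one common rewrite (not a tested fix for this schema): expressing the NOT IN as a correlated NOT EXISTS anti-join, which MySQL can often satisfy with an index such as (member_id, completed, tm_completed) instead of materializing the whole subquery:

        SELECT COUNT(p.payment_id) AS signup_count,
               SUM(p.amount)       AS signup_amount
        FROM payments p
        WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND p.completed > 0
          AND p.tm_completed IS NOT NULL
          AND NOT EXISTS (
              SELECT 1
              FROM payments p2
              WHERE p2.member_id    = p.member_id
                AND p2.completed    = 1
                AND p2.tm_completed < '2009-05-01'
                AND p2.tm_completed IS NOT NULL
          );

    The composite index is an assumption; whether it helps depends on the optimizer and the data distribution, so comparing EXPLAIN output before and after is the safer path.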


  • Cursor returns zero rows from query to table

    - by brockoli
    I've created an SQLiteDatabase in my app and populated it with some data. I can connect to my AVD with a terminal, and when I issue select * from articles; I get a list of all the rows in my table and everything looks fine. However, in my code when I query my table, I get a cursor back that has my table's columns, but zero rows of data. Here is my code:

        mDbHelper.open();
        Cursor articles = mDbHelper.fetchAllArticles();
        startManagingCursor(articles);
        Cursor feeds = mDbHelper.fetchAllFeeds();
        startManagingCursor(feeds);
        mDbHelper.close();

        int titleColumn = articles.getColumnIndex("title");
        int feedIdColumn = articles.getColumnIndex("feed_id");
        int feedTitleColumn = feeds.getColumnIndex("title");

        /* Check if our result was valid. */
        if (articles != null) {
            int count = articles.getCount();
            /* Check if at least one Result was returned. */
            if (articles.moveToFirst()) {

    In the above code, my Cursor articles comes back with my 4 columns, but when I call getCount() it returns zero, even though I can see hundreds of rows of data in that table from the command line. Any idea what I might be doing wrong here? Also, here is my code for fetchAllArticles:

        public Cursor fetchAllArticles() {
            return mDb.query(ARTICLES_TABLE, new String[] {ARTICLE_KEY_ROWID, ARTICLE_KEY_FEED_ID,
                    ARTICLE_KEY_TITLE, ARTICLE_KEY_URL}, null, null, null, null, null);
        }

    Rob W.


  • Data scheme question

    - by Matt
    I am designing a data model for a local city page - more like requirements for it, so far. There are 4 tables: Country, State, City, Neighbourhood. The real-life relationship is: a Country owns multiple States, which own multiple Cities, which own multiple Neighbourhoods. In the data model, do we link these with FKs along that same chain, or do we link each table with each? In other words, should each table carry a CountryID, StateID, CityID and NeighbourhoodID so that everything is connected to everything? Otherwise, to reach a neighbourhood from a country we need to join the 2 tables in between. There are also more tables I need to maintain for the IP addresses of cities, latitude/longitude, etc.
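
    A minimal sketch of the chained (normalized) variant, where each table references only its immediate parent; the table and column names are assumptions for illustration:

        CREATE TABLE country (
            country_id INT PRIMARY KEY,
            name       VARCHAR(100) NOT NULL
        );

        CREATE TABLE state (
            state_id   INT PRIMARY KEY,
            country_id INT NOT NULL REFERENCES country (country_id),
            name       VARCHAR(100) NOT NULL
        );

        CREATE TABLE city (
            city_id  INT PRIMARY KEY,
            state_id INT NOT NULL REFERENCES state (state_id),
            name     VARCHAR(100) NOT NULL
        );

        CREATE TABLE neighbourhood (
            neighbourhood_id INT PRIMARY KEY,
            city_id          INT NOT NULL REFERENCES city (city_id),
            name             VARCHAR(100) NOT NULL
        );

        -- Reaching neighbourhoods from a country then takes two intermediate joins:
        SELECT n.name
        FROM neighbourhood n
        JOIN city  c ON c.city_id  = n.city_id
        JOIN state s ON s.state_id = c.state_id
        WHERE s.country_id = 1;

    Denormalizing by also storing, say, country_id on city and neighbourhood avoids the intermediate joins at the cost of redundant data that must be kept consistent; which trade-off wins depends on how the tables are queried.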


  • MySQL: automatic rollback on transaction failure

    - by praksant
    Is there any way to set MySQL to roll back any transaction automatically on the first error/warning? Right now, if everything goes well it commits, but on failure it leaves the transaction open, and at the next start of a transaction it commits the incomplete changes from the failed one. (I'm executing the queries from PHP, but I don't want to check for failure in PHP, as that would mean more round trips between the MySQL server and the web server.) Thank you
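
    One server-side option, sketched under the assumption that the statements can be wrapped in a stored procedure (RESIGNAL needs MySQL 5.5 or later): an EXIT handler rolls back and re-raises on any SQL error, so a failed run never leaves the transaction open:

        DELIMITER //
        CREATE PROCEDURE do_batch()
        BEGIN
            -- Roll back and re-raise the original error if any statement fails.
            DECLARE EXIT HANDLER FOR SQLEXCEPTION
            BEGIN
                ROLLBACK;
                RESIGNAL;
            END;

            START TRANSACTION;
            -- ... the statements currently issued from PHP ...
            COMMIT;
        END //
        DELIMITER ;

    PHP would then issue a single CALL do_batch(), which also keeps the round trips between the web server and MySQL down.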


  • importing csv file into pgsql

    - by running4surival
    OK, I'm trying to upload this CSV file into my table in pgsql, but I'm getting this error:

        ERROR: invalid input syntax for integer: "mlname,mfname,slname,sfname,address,postalcode,membershiptype,hphone,email"
        CONTEXT: COPY members2, line 1, column id: "mlname,mfname,slname,sfname,address,postalcode,membershiptype,hphone,email"

    I don't really understand why I'm getting this error - both my table and my CSV file have the same column names.
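
    The error text suggests (an inference, not stated in the question) that the CSV's header line is being read as data, with the header text landing in the integer id column. A minimal sketch of a COPY that names the columns and skips the header; the file path is a placeholder:

        COPY members2 (mlname, mfname, slname, sfname, address, postalcode,
                       membershiptype, hphone, email)
        FROM '/path/to/members.csv'
        WITH (FORMAT csv, HEADER true);   -- on older servers: WITH CSV HEADER

    Listing the columns also leaves the id column to be filled by its default, assuming it is a serial/sequence column.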


  • Postgresql 9.1: ERROR: type "citext" does not exist

    - by gotuskar
    I am trying to execute the following query through the PgAdmin utility:

        CREATE TABLE svcr."EventLogs" (
            "eventId"     BIGINT NOT NULL,
            "eventTime"   TIMESTAMP WITH TIME ZONE NOT NULL,
            "userid"      CITEXT,
            "realmid"     CITEXT NOT NULL,
            "onUserid"    CITEXT,
            "description" TEXT,
            CONSTRAINT eventlogs_pkey PRIMARY KEY ("eventId")
        );

    And I get the following error:

        ERROR: type "citext" does not exist
        SQL state: 42704
        Character: 120

    However, the following query runs fine:

        CREATE TABLE svcr."CategoryMap" ("category" INT NOT NULL, "userData" INT NOT NULL);

    What is wrong with the first query?
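
    In PostgreSQL 9.1, citext is a contrib extension rather than a built-in type, so a likely fix (assuming the contrib packages are installed on the server) is to create the extension in the target database before running the CREATE TABLE:

        -- Installs the case-insensitive text type into the current database.
        CREATE EXTENSION citext;

    The second query works because INT is built in, which is consistent with the missing-type explanation.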


  • Can somebody suggest a good learning source for IMS?

    - by Raja Reddy
    I would like to learn how to work with IMS; can somebody suggest a good source? I'm not sure if it matters, but I have quite good exposure to and experience with INSYNC DB2 and QMF. Anything that can illustrate and explain the advantages and disadvantages compared with IMS would be really helpful. Thanks for your help in advance.


  • Datamapper Clone Record w/ New ID

    - by BouncePast
        class Item
          include DataMapper::Resource
          property :id,    Serial
          property :title, String
        end

        item = Item.new(:title => 'Title 1')    # :id => 1
        item_clone = Item.first(:id => 1).clone
        item_clone.save

    This does "clone" the object as described, but how can this be done so that it applies a different ID once the record is saved, e.g. #


  • VB6. DataEnvironment: How to Configure/Set Trusted Connection?

    - by Mblua
    Hi, I have a DataEnvironment component in my VB6 application. I can't configure the DataEnvironment connection to use a trusted connection without it asking me how to connect. I can set the prompt to never appear, but then it doesn't connect, because it doesn't use the trusted connection. At this link you can see the screens of the DataEnvironment connection options and the prompt: Google Presentation Link. I need this program to be executed from a DOS console without it prompting for anything. Thanks a lot!


  • When using a HiLo ID generation strategy, what types should be used to hold Ids?

    - by UpTheCreek
    I'm asking this from a C#/NHibernate perspective, but it's generally applicable. The concern is that the HiLo strategy goes through ids pretty quickly, and, for example, a low-record-count table (such as Users) shares the same set of ids as a high-record-count table (such as Comments). So you can potentially get to high numbers quicker than with other strategies. So what do people recommend? Code side: int/uint/long/ulong? DB side: int/bigint? My feeling is to go with longs and bigints, but I would like a sanity check :)


  • What is the most efficient procedure for implementing a sortable ajax list on the backend?

    - by HenryL
    The most common method is to assign a sequential order field for each item in the list and do an update that maintains the sequence with every ajax sort operation. Unfortunately, this requires an update to each item of the list every time someone sorts. This is fine for small lists, but what's the best way to implement sorting for larger lists that are constantly updated? I am looking for something that minimizes DB IO.
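
    One widely used alternative, sketched here with assumed table and column names rather than as a definitive answer: keep gaps in the sort keys so a drag-and-drop move writes only the row that moved, and renumber only when a gap is exhausted:

        -- Positions are spaced out (e.g. 1024 apart) when rows are inserted.
        CREATE TABLE list_items (
            id       INT PRIMARY KEY,
            title    VARCHAR(100) NOT NULL,
            position INT NOT NULL
        );

        -- Dropping item 42 between neighbours at positions 2048 and 3072
        -- touches a single row: its new position is the midpoint.
        UPDATE list_items
        SET position = (2048 + 3072) / 2
        WHERE id = 42;

    When two neighbours end up with adjacent positions, only that region (or, rarely, the whole list) needs renumbering, which keeps the per-sort DB IO close to one row.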


  • mysql filtering result using left outer join

    - by user288178
    My query:

        SELECT content.*, activity_log.content_id
        FROM content
        LEFT JOIN activity_log
               ON content.id = activity_log.content_id
              AND sess_id = '$sess_id'
        WHERE activity_log.content_id IS NULL
          AND visibility = $visibility
          AND content.reported < ".REPORTED_LIMIT."
          AND content.file_ready = 1
        LIMIT 1

    The purpose of this query is to get 1 row from the content table that has not been viewed by the user (identified by session_id), but it still returns content that has been viewed. What is wrong? (I have checked the table, making sure that the content_ids are there.) Note: I think this is more efficient than using subqueries - thoughts?
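
    A hedged cross-check (assuming sess_id is meant to be a column of activity_log; if it actually resolves to a column of content, the anti-join silently changes meaning): qualifying the column and writing the same filter as NOT EXISTS makes the intent explicit, with placeholder literals standing in for the PHP variables:

        SELECT c.*
        FROM content c
        WHERE c.visibility = 1            -- placeholder for $visibility
          AND c.reported   < 10           -- placeholder for REPORTED_LIMIT
          AND c.file_ready = 1
          AND NOT EXISTS (
              SELECT 1
              FROM activity_log a
              WHERE a.content_id = c.id
                AND a.sess_id    = 'abc123'   -- placeholder for $sess_id
          )
        LIMIT 1;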

