Search Results

Search found 5153 results on 207 pages for 'unique ptr'.


  • Do programmers need a union?

    - by James A. Rosen
    In light of the acrid responses to the intellectual property clause discussed in my previous question, I have to ask: why don't we have a programmers' union? There are many issues we face as employees, and we have very little ability to organize and negotiate. Could we band together with the writers', directors', or musicians' guilds, or are our needs unique? Has anyone ever tried to start one? If so, why did it fail? (Or, alternatively, why have I never heard of it, despite its success?)

    Later: Keith has my idea basically right. I would also imagine the union being involved in many other topics, including:

    - legal liability for others' use/misuse of our work, especially unintended uses
    - evaluating the quality of computer science and software engineering higher education programs -- unlike many other engineering disciplines, we are not required to be certified on receiving our Bachelor's degrees
    - evangelism and outreach -- especially to elementary school students
    - certification -- not doing it ourselves, but working with companies like ISC(2) and others to make certifications meaningful and useful
    - continuing education -- similar to the previous item
    - conferences -- maintain a go-to list of organizers and other resources our members can use

    I would see it less as a traditional trade union, with little emphasis on:

    - pay -- we tend to command fairly good salaries
    - outsourcing and free trade -- most of us tend to be pretty free-market oriented
    - working conditions -- we're the only industry where Aeron chairs are considered anything like "standard"

    Read the article

  • CUDA Kernel Not Updating Global Variable

    - by Taher Khokhawala
    I am facing the following problem in a CUDA kernel. There is an array "cu_fx" in global memory. Each thread has a unique identifier jj, a local loop variable ii, and a local float variable temp. The following code is not working: it does not change cu_fx[jj] at all, and at the end of the loop cu_fx[jj] remains 0.

        ii = 0;
        cu_fx[jj] = 0;
        while (ii < l) {
            if (cu_y[ii] > 0)
                cu_fx[jj] += (cu_mu[ii]*cu_Kernel[(jj-start_row)*Kernel_w + ii]);
            else
                cu_fx[jj] -= (cu_mu[ii]*cu_Kernel[(jj-start_row)*Kernel_w + ii]);
            ii++;
        }

    But when I rewrite it using a temporary variable temp, it works fine:

        ii = 0;
        temp = 0;
        while (ii < l) {
            if (cu_y[ii] > 0)
                temp += (cu_mu[ii]*cu_Kernel[(jj-start_row)*Kernel_w + ii]);
            else
                temp -= (cu_mu[ii]*cu_Kernel[(jj-start_row)*Kernel_w + ii]);
            ii++;
        }
        cu_fx[jj] = temp;

    Can somebody please help with this problem? Thanks in advance.
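
    For reference, a minimal self-contained kernel built around the working pattern -- accumulate in a per-thread register and write to global memory once. The launch geometry, the n_rows bound, and the parameter types are assumptions, not the asker's actual code:

        __global__ void compute_fx(const float *cu_y, const float *cu_mu,
                                   const float *cu_Kernel, float *cu_fx,
                                   int l, int start_row, int Kernel_w, int n_rows)
        {
            // one thread per output row; jj is the unique identifier from the question
            int jj = start_row + blockIdx.x * blockDim.x + threadIdx.x;
            if (jj - start_row >= n_rows) return;     // guard against stray threads

            float temp = 0.0f;                        // per-thread register accumulator
            for (int ii = 0; ii < l; ++ii) {
                float term = cu_mu[ii] * cu_Kernel[(jj - start_row) * Kernel_w + ii];
                temp += (cu_y[ii] > 0.0f) ? term : -term;
            }
            cu_fx[jj] = temp;                         // single global-memory write
        }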

    Read the article

  • Combining data sets without losing observations in SAS

    - by John
    Hi guys, I know, another post, another problem :D :(. I took a screenshot to easily explain my problem: http://i39.tinypic.com/rhms0h.jpg

    As you can see, I want to merge two tables (again), the Base and Analyst tables. What I want to achieve is displayed in the bottom-right corner table. I'm calculating the number of total analysts and female analysts for each month in the Analyst table. In the Base table I have different observations for one company (here company Alcoa with ticker AA). When I use the following command:

        data want;
            merge base analyst;
            by month;
        run;

    I get the problem shown in the top-right corner: the observations in my main table are narrowed down to only 4 observations (one for each different year: 2001, 2002, 2005, 2006). What I want is that the observations are not reduced, but that for every year the same data is placed as shown in the bottom-right corner. What am I missing in my merge command? In both tables I have month as a time count variable (the observations in my Base table are monthly) on which I need to merge.

    For clarity I added two screenshots of my real databases in SAS. The Base table: http://i42.tinypic.com/dr5jky.jpg The Analyst table: http://i40.tinypic.com/eqpmqq.jpg And here is what my merged table looks like: http://i43.tinypic.com/116i62s.jpg You can clearly see that the merged table only has four observations left for AA (one for each unique year) instead of the original 8. Anyone have an idea how to solve this?
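
    A hedged sketch of one standard workaround: a PROC SQL left join keeps every Base observation and copies the monthly analyst counts onto each row. The count column names (total_analysts, female_analysts) are assumptions about the Analyst table:

        proc sql;
            create table want as
            select b.*,
                   a.total_analysts,
                   a.female_analysts
            from base as b
            left join analyst as a
                on b.month = a.month;   /* every base row survives the join */
        quit;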

    Read the article

  • How to delete duplicate vectors within a multidimensional vector?

    - by David
    I have a vector of vectors:

        vector< vector<int> > BigVec;

    It contains an arbitrary number of vectors, each of an arbitrary size. I want to delete not the duplicate elements of each vector, but any vectors that are exactly the same as another. I don't need to preserve the order of the vectors, so I can sort etc. It should be a really simple problem to solve, but I'm new to this. My (not-working) best effort:

        for (int i = 0; i < BigVec.size(); i++) {
            for (int j = 1; j < BigVec.size(); j++) {
                if (BigVec[i][0] == BigVec[j][i]);
                {
                    BigVec.erase(BigVec.begin() + j);
                    i = 0; // because I get the impression deleting a
                    j = 1; // vector messes up a simple iteration through
                }
            }
        }

    I think there might be a solution using std::unique, but I can't get that to work either.
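
    A minimal sketch of the usual idiom, given that order need not be preserved: sort the outer vector so duplicates become adjacent, then remove them with std::unique plus erase:

        #include <algorithm>
        #include <vector>

        void removeDuplicateVectors(std::vector< std::vector<int> >& bigVec)
        {
            // lexicographic sort: identical inner vectors become adjacent
            std::sort(bigVec.begin(), bigVec.end());
            // std::unique shifts duplicates to the tail; erase drops them
            bigVec.erase(std::unique(bigVec.begin(), bigVec.end()),
                         bigVec.end());
        }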

    Read the article

  • What would be the time complexity of counting the number of all structurally different binary trees?

    - by ktslwy
    Using the countTrees() solution (Java) presented here: http://cslibrary.stanford.edu/110/BinaryTrees.html#java

        /**
         For the key values 1...numKeys, how many structurally unique
         binary search trees are possible that store those keys?
         Strategy: consider that each value could be the root.
         Recursively find the size of the left and right subtrees.
        */
        public static int countTrees(int numKeys) {
            if (numKeys <= 1) {
                return(1);
            }
            else {
                // there will be one value at the root, with whatever remains
                // on the left and right each forming their own subtrees.
                // Iterate through all the values that could be the root...
                int sum = 0;
                int left, right, root;
                for (root=1; root<=numKeys; root++) {
                    left = countTrees(root-1);
                    right = countTrees(numKeys - root);
                    // number of possible trees with this root == left*right
                    sum += left*right;
                }
                return(sum);
            }
        }

    I have a sense that it might be n(n-1)(n-2)...1, i.e. n!
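
    For what it's worth, a hedged sketch of the same recurrence computed bottom-up: it runs in O(n^2) instead of re-deriving subtrees exponentially, and the values it produces are the Catalan numbers (which grow roughly like 4^n / n^1.5 -- not n!):

        public static int countTreesDP(int numKeys) {
            // counts[k] = number of structurally unique BSTs holding k keys
            int[] counts = new int[numKeys + 1];
            counts[0] = 1;                        // the empty tree
            for (int n = 1; n <= numKeys; n++) {
                for (int root = 1; root <= n; root++) {
                    // left subtree gets root-1 keys, right subtree the rest
                    counts[n] += counts[root - 1] * counts[n - root];
                }
            }
            return counts[numKeys];
        }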

    Read the article

  • Encrypt/Decrypt binary mp3 with mcrypt, missing mimetype

    - by Jeremy Dicaire
    I have a script that reads an mp3 file and encrypts it. I want to be able to decrypt this file and convert it to base64 so it can play in HTML5. Key 1 will be stored on the page and static; key 2 will be unique for each file. For testing I used:

        $key1 = md5(time());
        $key2 = md5($key1.time());

    Here is my encode PHP code:

        //Get file content
        $file = file_get_contents('test.mp3');
        //Encrypt file
        $Encrypt = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $key1, $file, MCRYPT_MODE_CBC, $key2);
        $Encrypt = trim(base64_encode($Encrypt));
        //Create new file
        $fileE = "test.mp3e";
        $fileE = fopen($file64, 'w') or die("can't open file");
        //Put crypted content
        fwrite($fileE, $Encrypt);
        //Close file
        fclose($fileE);

    Here is the code that doesn't work (the decoded file is the same size, but has no mimetype):

        //Get file content
        $fileE = file_get_contents('test.mp3e');
        //Decode
        $fileDecoded = base64_decode($fileE);
        //Decrypt file
        $Decrypt = mcrypt_decrypt(MCRYPT_RIJNDAEL_256, $key1, $fileDecoded, MCRYPT_MODE_CBC, $key2);
        $Decrypt = trim($Decrypt);
        //Create new file
        $file = "test.mp3";
        $file = fopen($file, 'w') or die("can't open file");
        //Put crypted content
        fwrite($file, $Decrypt);
        //Close file
        fclose($file);
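
    One hedged guess at the culprit, sketched below: mcrypt zero-pads the plaintext up to the block size, and a bare trim() strips more than those trailing NUL bytes from binary data (it also eats whitespace-like bytes at both ends). rtrim($Decrypt, "\0") is the safer cleanup; file names here are illustrative:

        //Decrypt side, stripping only the zero-byte padding mcrypt added
        $fileE = file_get_contents('test.mp3e');
        $decoded = base64_decode($fileE);
        $Decrypt = mcrypt_decrypt(MCRYPT_RIJNDAEL_256, $key1, $decoded,
                                  MCRYPT_MODE_CBC, $key2);
        $Decrypt = rtrim($Decrypt, "\0");
        file_put_contents('test_decrypted.mp3', $Decrypt);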

    Read the article

  • Google App Engine - Uploading blobs and authentication

    - by Keyur
    (I tried asking this on the GAE forums but didn't get an answer, so am trying it here.) Currently, to upload blobs, the App Engine blobstore service creates a unique one-time URL that a user can post blobs to. My requirement is that I only want authenticated / authorized users to post blobs in my application. I can achieve this currently if the page that includes the multipart form to upload blobs is in my application. However, I am looking at providing a "REST API" for my users to upload their blobs. While it is true that the one-time nature of the upload URL mitigates the chances of rogue use, it's still possible.

    I was wondering if there is anyone on the App Engine team here that can consider a feature where developers can register an upload listener. (Or if there is already a way, I'll be all ears.) A standard servlet filter could also potentially do the job. This would give us an opportunity to authenticate / validate / decorate requests before the request gets forwarded to the blobstore service. Thanks, Keyur

    Read the article

  • Creating form using Generic_inlineformset_factory from the Model Form

    - by Prateek
    Hello all, I want to create an edit form with the help of ModelForm, and my models contain a generic relation between classes. Could anyone suggest the view and a bit of template for this purpose? I am new to the language and would be very thankful. My models look like:

        class Employee(Person):
            nickname = models.CharField(_('nickname'), max_length=25, null=True, blank=True)
            blood_type = models.CharField(_('blood group'), max_length=3, null=True, blank=True, choices=BLOOD_TYPE_CHOICES)
            marital_status = models.CharField(_('marital status'), max_length=1, null=True, blank=True, choices=MARITAL_STATUS_CHOICES)
            nationality = CountryField(_('nationality'), default='IN', null=True, blank=True)
            about = models.TextField(_('about'), blank=True, null=True)
            dependent = models.ManyToManyField(Dependent, through='DependentRelationship')
            pan_card_number = models.CharField(_('PAN card number'), max_length=50, blank=True, null=True)
            policy_number = models.CharField(_('policy number'), max_length=50, null=True, blank=True)
            # code specific details
            user = models.OneToOneField(User, blank=True, null=True, verbose_name=_('user'))

        class Person(models.Model):
            """Person model"""
            title = models.CharField(_('title'), max_length=20, null=True, blank=True)
            first_name = models.CharField(_('first name'), max_length=100)
            middle_name = models.CharField(_('middle name'), max_length=100, null=True, blank=True)
            last_name = models.CharField(_('last name'), max_length=100, null=True, blank=True)
            suffix = models.CharField(_('suffix'), max_length=20, null=True, blank=True)
            slug = models.SlugField(_('slug'), max_length=50, unique=True)

        class PhoneNumber(models.Model):
            phone_number = generic.GenericRelation('PhoneNumber')
            email_address = generic.GenericRelation('EmailAddress')
            address = generic.GenericRelation('Address')
            date_of_birth = models.DateField(_('date of birth'), null=True, blank=True)
            gender = models.CharField(_('gender'), max_length=1, null=True, blank=True, choices=GENDER_CHOICES)
            content_type = models.ForeignKey(ContentType,

    If anyone could suggest a link or so, it would be a great help.
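
    A hedged sketch of a view/template pairing using generic_inlineformset_factory (names, URL wiring, and the template file are assumptions; older Django versions keep the factory in django.contrib.contenttypes.generic rather than .forms):

        from django.contrib.contenttypes.generic import generic_inlineformset_factory
        from django.shortcuts import get_object_or_404, render

        # one formset per generically related model
        PhoneNumberFormSet = generic_inlineformset_factory(PhoneNumber, extra=1)

        def edit_employee(request, pk):
            employee = get_object_or_404(Employee, pk=pk)
            formset = PhoneNumberFormSet(request.POST or None, instance=employee)
            if request.method == 'POST' and formset.is_valid():
                formset.save()
            return render(request, 'edit_employee.html', {'formset': formset})

    and a minimal edit_employee.html:

        <form method="post">{% csrf_token %}
            {{ formset.management_form }}
            {% for form in formset %}{{ form.as_p }}{% endfor %}
            <input type="submit" value="Save" />
        </form>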

    Read the article

  • nhibernate error recovery

    - by Berryl
    I downloaded Rhino Security today and started going through some of the tests. Several that run perfectly in isolation start getting errors after one that purposely raises an exception runs. Here is that test:

        [Test]
        public void EntitesGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateEntitiesGroup("Accounts");
            _session.Flush();
            _session.Evict(group);

            var fromDb = _session.Get<EntitiesGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

    And here are the tests and error messages that fail:

        [Test]
        public void User_CanSave()
        {
            var ayende = new User { Name = "ayende" };
            _session.Save(ayende);
            _session.Flush();
            _session.Evict(ayende);

            var fromDb = _session.Get<User>(ayende.Id);
            Assert.That(fromDb, Is.Not.Null);
            Assert.That(ayende.Name, Is.EqualTo(fromDb.Name));
        }

        ----> System.Data.SQLite.SQLiteException : Abort due to constraint violation
              column Name is not unique

        [Test]
        public void UsersGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateUsersGroup("Admininstrators");
            _session.Flush();
            _session.Evict(group);

            var fromDb = _session.Get<UsersGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

        failed: NHibernate.AssertionFailure : null id in Rhino.Security.Tests.User entry
                (don't flush the Session after an exception occurs)

    Does anyone see how I can reset the state of the in-memory SQLite db after the first test? I changed the code to use NUnit instead of xUnit, so maybe that is part of the problem here as well. Cheers, Berryl
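
    A hedged sketch of one common fix: rebuild the in-memory SQLite schema in NUnit's [SetUp] so each test starts from a clean database, whatever a previous test left behind. Field names (_sessionFactory, _configuration) are assumptions about the fixture, and this presumes one connection per session, since an in-memory SQLite database lives and dies with its connection:

        [SetUp]
        public void ResetDatabase()
        {
            _session = _sessionFactory.OpenSession();
            // re-create every mapped table on the fresh in-memory connection
            // (SchemaExport lives in NHibernate.Tool.hbm2ddl)
            new SchemaExport(_configuration)
                .Execute(false, true, false, _session.Connection, null);
        }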

    Read the article

  • Getting value of "i" from GEvent

    - by Cosizzle
    Hello, I'm trying to add an event listener to each icon on the map when it's pressed. I'm storing the information in the database, and the value I want to retrieve is "i". However, when I output "i", I get its last value, which is 5 (there are 6 objects being drawn onto the map). Below is the code; what would be the best way to get the value of i, and not the object itself?

        var drawLotLoc = function(id) {
            var lotLoc = new GIcon(G_DEFAULT_ICON);              // create icon object
            lotLoc.image = url+"images/markers/lotLocation.gif"; // set the icon image
            lotLoc.shadow = "";                                  // no shadow
            lotLoc.iconSize = new GSize(24, 24);                 // set the size
            var markerOptions = { icon: lotLoc };

            $.post(opts.postScript, {action: 'drawlotLoc', id: id}, function(data) {
                var markers = new Array();
                // lotLoc[x].description
                // lotLoc[x].lat
                // lotLoc[x].lng
                // lotLoc[x].nighbourhood
                // lotLoc[x].lot
                var lotLoc = $.evalJSON(data);
                for (var i=0; i<lotLoc.length; i++) {
                    var spLat = parseFloat(lotLoc[i].lat);
                    var spLng = parseFloat(lotLoc[i].lng);
                    var latlng = new GLatLng(spLat, spLng);
                    markers[i] = new GMarker(latlng, markerOptions);
                    myMap.addOverlay(markers[i]);
                    GEvent.addListener(markers[i], "click", function() {
                        console.log(i); // returning 5 in all cases.
                                        // I _need_ this to be unique to the object being clicked.
                        console.log(this);
                    });
                }
            });
        }
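
    This is the classic closure-over-a-loop-variable problem: every listener shares the same i, which has reached its final value by the time any click fires. A minimal sketch of the usual fix, capturing the current value with an immediately-invoked function:

        for (var i = 0; i < lotLoc.length; i++) {
            // ... build latlng and markers[i] as before ...
            (function(index) {                        // freeze the current value of i
                GEvent.addListener(markers[index], "click", function() {
                    console.log(index);               // unique per marker
                });
            })(i);
        }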

    Read the article

  • Mysql - Help me alter this search query to get desired results

    - by sandeepan-nath
    The following is a dump of the tables and data needed to understand the system. The system consists of tutors and classes. The data in the All_Tag_Relations table stores tag relations for each tutor registered and each class created by a tutor. The tag relations are used for searching classes.

        CREATE TABLE IF NOT EXISTS `Tags` (
            `id_tag` int(10) unsigned NOT NULL auto_increment,
            `tag` varchar(255) default NULL,
            PRIMARY KEY (`id_tag`),
            UNIQUE KEY `tag` (`tag`),
            KEY `id_tag` (`id_tag`),
            KEY `tag_2` (`tag`),
            KEY `tag_3` (`tag`),
            KEY `tag_4` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'),
        (5, 'new'), (6, 'Bob'), (7, 'Cratchit');

        CREATE TABLE IF NOT EXISTS `All_Tag_Relations` (
            `id_tag` int(10) unsigned NOT NULL default '0',
            `id_tutor` int(10) default NULL,
            `id_wc` int(10) unsigned default NULL,
            KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
            KEY `id_wc` (`id_wc`),
            KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `All_Tag_Relations` (`id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 1, NULL), (2, 1, NULL), (3, 1, 1), (4, 1, 1),
        (6, 2, NULL), (7, 2, NULL), (5, 2, 2), (4, 2, 2);

    Following is my query. It searches for "first class" (tag id 3 for first and 4 for class, in the Tags table) and returns all those classes such that both the terms first and class are present in the class name:

        SELECT wtagrels.id_wc,
               SUM(DISTINCT(wtagrels.id_tag = 3)) AS key_1_total_matches,
               SUM(DISTINCT(wtagrels.id_tag = 4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE (wtagrels.id_tag = 3 OR wtagrels.id_tag = 4)
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1
           AND key_2_total_matches = 1
        LIMIT 0, 20

    And it returns the class with id_wc = 1. But I want the search to show all those classes such that all the search terms are present in the class name or its tutor name, so that searching "Sandeepan class" (wtagrels.id_tag = 1, 4) or "Sandeepan Nath" also returns the class with id_wc = 1, while searching "Bob First" should not return any classes. Please modify the above query or suggest a new query, if possible using MyISAM full-text search, but somehow help me get the result.
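
    A hedged sketch of one way to widen the match: join each class row to every tag relation of the same tutor (the tutor's own tags have id_wc IS NULL), so a term counts if it tags either the class or its tutor. Tag ids 1 and 4 stand in for the "Sandeepan class" search:

        SELECT c.id_wc,
               MAX(t.id_tag = 1) AS key_1_matched,
               MAX(t.id_tag = 4) AS key_2_matched
        FROM All_Tag_Relations AS c
        JOIN All_Tag_Relations AS t
            ON t.id_tutor = c.id_tutor
           AND (t.id_wc = c.id_wc OR t.id_wc IS NULL)  -- class tags plus tutor tags
        WHERE c.id_wc IS NOT NULL                       -- only class rows on the left
        GROUP BY c.id_wc
        HAVING key_1_matched = 1
           AND key_2_matched = 1
        LIMIT 0, 20;

    Against the sample data this returns id_wc = 1 for "Sandeepan class" and "Sandeepan Nath", and nothing for "Bob First" (tags 6 and 3 never meet under one tutor).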

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have any ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage?

    - The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter).
    - The locally stored data is synced with the remote server (using the ember-data RESTAdapter).
    - The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client.

    Here are some scenarios:

    - A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and only then be created in localStorage.
    - A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push architecture (?)

    Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid IndexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (Web Sockets, ...)?
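
    A hedged, framework-free sketch of the first scenario (the endpoint and localStorage key scheme are assumptions): commit the new record to the server first, adopt the server-issued id, then persist locally:

        function createRecord(record) {
            return fetch('/api/records', {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify(record)
            })
            .then(function(res) { return res.json(); })
            .then(function(saved) {
                // only now does the record exist locally, under the server's id
                localStorage.setItem('record:' + saved.id, JSON.stringify(saved));
                return saved;
            });
        }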

    Read the article

  • Will rel=canonical break site: queries ?

    - by Justin Grant
    Our company publishes our software product's documentation using a custom-built content management system with a dynamic URL namespace like this:

        http://ourproduct.com/documentation/version/pageid

    where "version" is the version number to which the documentation applies, and "pageid" is a unique string which identifies that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged from version 3.0 to 4.0 of our product, it'd be reachable by two different URLs:

        http://ourproduct.com/documentation/3.0/configuration-best-practices
        http://ourproduct.com/documentation/4.0/configuration-best-practices

    This URL scheme allows us to scope Google search results to see only documentation for a particular product version, like this:

        configuration site:ourproduct.com/documentation/4.0

    But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results. Instead, we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL we want to show up if multiple versions are being searched. (Users who do oddball things like searching 2 versions but not all of them are a corner case, so we don't care which version(s) show up there -- the primary use-cases we care about are searching one version or searching all versions.)

    But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result? Even if you don't know the answer offhand, do you know a site which uses rel=canonical across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.
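
    For concreteness, the tag in question on the 3.0 copy of the page would look like this, pointing at the 4.0 URL we want to win in unscoped searches:

        <link rel="canonical"
              href="http://ourproduct.com/documentation/4.0/configuration-best-practices" />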

    Read the article

  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed conn

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multisecond delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5ms. I am looking for an HTTP server, balancer, or proxy server that supports the following:

    1. A request arrives at the proxy. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
    2. The proxy server stops buffering the request when:
       - a size limit has been reached (say, 4KB), or
       - the request has been received completely, headers and body.
    3. Only now, with (part of) the request in memory, is a connection opened to the backend and the request relayed.
    4. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64KB).
    5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
    6. The proxy sends back the response to the mobile client, as fast or as slow as it is capable of, without a connection to the backend tying up resources.

    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Any of them that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model?

    (Side rant: I would be using nginx but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh)
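
    For the nginx side of steps 1-6, these are the relevant buffering directives; the values are illustrative, and full request-body buffering before proxying is nginx's default behaviour:

        location / {
            client_body_buffer_size 16k;  # buffer the request body in RAM first
            proxy_buffering on;           # buffer the backend response...
            proxy_buffer_size 8k;
            proxy_buffers 8 8k;           # ...up to 64KB before spilling to disk
            proxy_pass http://backend;
        }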

    Read the article

  • Hibernate Auto-Increment Setup

    - by dharga
    How do I define an entity for the following table? I've got something that isn't working, and I just want to see what I'm supposed to do.

        USE [BAMPI_TP_dev]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        SET ANSI_PADDING ON
        GO
        CREATE TABLE [dbo].[MemberSelectedOptions](
            [OptionId] [int] NOT NULL,
            [SeqNo] [smallint] IDENTITY(1,1) NOT NULL,
            [OptionStatusCd] [char](1) NULL
        ) ON [PRIMARY]
        GO
        SET ANSI_PADDING OFF

    This is what I have already that isn't working:

        @Entity
        @Table(schema="dbo", name="MemberSelectedOptions")
        public class MemberSelectedOption extends BampiEntity implements Serializable {

            @Embeddable
            public static class MSOPK implements Serializable {
                private static final long serialVersionUID = 1L;

                @Column(name="OptionId")
                int optionId;

                @GeneratedValue(strategy=GenerationType.IDENTITY)
                @Column(name="SeqNo", unique=true, nullable=false)
                BigDecimal seqNo;

                //Getters and setters here...
            }

            private static final long serialVersionUID = 1L;

            @EmbeddedId
            MSOPK pk = new MSOPK();

            @Column(name="OptionStatusCd")
            String optionStatusCd;

            //More getters and setters here...
        }

    I get the following stack trace:

        [5/25/10 15:49:40:221 EDT] 0000003d JDBCException E org.slf4j.impl.JCLLoggerAdapter error Cannot insert explicit value for identity column in table 'MemberSelectedOptions' when IDENTITY_INSERT is set to OFF.
        [5/25/10 15:49:40:221 EDT] 0000003d AbstractFlush E org.slf4j.impl.JCLLoggerAdapter error Could not synchronize database state with session
        org.hibernate.exception.SQLGrammarException: could not insert: [com.bob.proj.ws.model.MemberSelectedOption]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:90)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2285)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2678)
            at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)
            at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028)
            at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:366)
            at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
            at com.bcbst.bamp.ws.dao.MemberSelectedOptionDAOImpl.saveMemberSelectedOption(MemberSelectedOptionDAOImpl.java:143)
            at com.bcbst.bamp.ws.common.AlertReminder.saveMemberSelectedOptions(AlertReminder.java:76)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
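
    A hedged sketch of a mapping that matches how the table actually behaves: Hibernate cannot generate values for a column buried inside an @EmbeddedId, so one option is to let the IDENTITY column be the lone @Id and map OptionId as an ordinary column (field types assumed):

        @Entity
        @Table(schema = "dbo", name = "MemberSelectedOptions")
        public class MemberSelectedOption extends BampiEntity implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Column(name = "SeqNo")
            private short seqNo;            // SMALLINT IDENTITY(1,1)

            @Column(name = "OptionId", nullable = false)
            private int optionId;

            @Column(name = "OptionStatusCd")
            private String optionStatusCd;

            // getters and setters omitted
        }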

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason, when I sort this query by DESC it's super fast, but sorted by ASC it's extremely slow. This takes about 150 milliseconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published DESC
        LIMIT 0, 50;

    This takes about 32 seconds:

        SELECT posts.id FROM posts USE INDEX (published)
        WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 )
        ORDER BY posts.published ASC
        LIMIT 0, 50;

    The EXPLAIN is the same for both queries:

        id  select_type  table  type   possible_keys  key        key_len  ref   rows  Extra
        1   SIMPLE       posts  index  NULL           published  5        NULL  50    Using where

    I've tracked it down to "USE INDEX (published)". If I take that out, performance is the same both ways, but the EXPLAIN shows the query is less efficient overall:

        id  select_type  table  type   possible_keys  key      key_len  ref  rows  Extra
        1   SIMPLE       posts  range  feed_id        feed_id  4        \N   759   Using where; Using filesort

    And here's the table:

        CREATE TABLE `posts` (
            `id` int(20) NOT NULL AUTO_INCREMENT,
            `feed_id` int(11) NOT NULL,
            `post_url` varchar(255) NOT NULL,
            `title` varchar(255) NOT NULL,
            `content` blob,
            `author` varchar(255) DEFAULT NULL,
            `published` int(12) DEFAULT NULL,
            `updated` datetime NOT NULL,
            `created` datetime NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY `post_url` (`post_url`,`feed_id`),
            KEY `feed_id` (`feed_id`),
            KEY `published` (`published`)
        ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1;

    Is there a fix for this? Thanks!
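
    A hedged suggestion: a composite index covering both the filter and the sort usually removes this kind of asymmetry, since MySQL can then read (feed_id, published) in either direction per feed instead of walking the single-column published index through non-matching rows. A sketch:

        -- index matching the WHERE + ORDER BY shape of the query
        ALTER TABLE posts ADD INDEX feed_id_published (feed_id, published);

        -- then drop the USE INDEX hint and let the optimizer pick it:
        SELECT posts.id
        FROM posts
        WHERE posts.feed_id IN (4953,622,1,1852,4952,76,623,624,10)
        ORDER BY posts.published ASC
        LIMIT 0, 50;

    With an IN list a filesort may still appear, but over the few hundred matching rows rather than the whole table.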

    Read the article

  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, and otherwise update. Sadly Postgres does not have an insert-or-update statement, so I have to emulate it myself.

    I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Now clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to -- I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation:

        ab: => set transaction isolation level serializable;
        a:  => select count(1) from table where id=1;  --> 0
        b:  => select count(1) from table where id=1;  --> 0
        a:  => insert into table values(1);            --> 1
        b:  => insert into table values(1);
               --> ERROR: duplicate key value violates unique constraint "serial_test_pkey"

    Now I would expect it to throw the usual "couldn't commit due to concurrent update", but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
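
    For what it's worth: on PostgreSQL 9.5 or later a single statement sidesteps the race entirely, while older versions can use the plpgsql loop from the documentation's merge_db example, retrying on a unique_violation EXCEPTION. A sketch of the former, using the serial_test table name from the error message and an assumed value column:

        INSERT INTO serial_test (id, value)
        VALUES (1, 'some value')
        ON CONFLICT (id) DO UPDATE
        SET value = EXCLUDED.value;   -- insert if absent, update if present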

    Read the article

  • Slow retrieval of data in SQLite takes a long time using ContentProvider

    - by Arlyn
    I have an application in Android (running 4.0.3) that stores a lot of data in Table A. Table A resides in a SQLite database. I am using a ContentProvider as an abstraction layer above the database. "A lot of data" here means almost 80,000 records per month. Table A is structured like this:

        String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( "
            + COLUMN_ID + " INTEGER PRIMARY KEY NOT NULL"
            + "," + COLUMN_GROUPNO + " INTEGER NOT NULL DEFAULT(0)"
            + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL"
            + "," + COLUMN_TAG + " TEXT"
            + "," + COLUMN_VALUE + " REAL NOT NULL"
            + "," + COLUMN_DEVICEID + " TEXT NOT NULL"
            + "," + COLUMN_NEW + " NUMERIC NOT NULL DEFAULT(1)"
            + " )";

    Here is the index statement:

        String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS "
            + TABLE_A + "_" + COLUMN_TIMESTAMP
            + " ON " + TABLE_A + " (" + COLUMN_TIMESTAMP + ") ";

    I have defined the columns as well as the table name as String constants. I am already experiencing a significant slowdown when retrieving data from Table A. The problem is that when I retrieve data from this table, I first put it in an ArrayList and then I display it. Obviously, this is possibly the wrong way of doing things. I am trying to find a better way to approach this problem using a ContentProvider.

    But this is not the problem that bothers me. The problem is that for some reason it takes a lot longer to retrieve data from the other tables, which have only up to 12 records maximum. I see this delay increase as the number of records in Table A increases, which does not make any sense. I can understand the delay when retrieving data from Table A, but why the delay in retrieving data from the other tables? To clarify, I do not experience this delay if Table A is empty or has fewer than 3000 records. What could be the problem?

    Read the article

  • Generate a set of strings with maximum edit distance

    - by Kevin Jacobs
    Problem 1: I'd like to generate a set of n strings of fixed length m from alphabet s such that the minimum Levenshtein distance (edit distance) between any two strings is greater than some constant c. Obviously, I can use randomization methods (e.g., a genetic algorithm), but was hoping that this may be a well-studied problem in computer science or mathematics with some informative literature and an efficient algorithm or three.

    Problem 2: Same as above, except that adjacent characters cannot repeat; the i'th character in each string may not be equal to the i+1'th character. E.g., 'CAT', 'AGA' and 'TAG' are allowed; 'GAA', 'AAT', and 'AAA' are not.

    Background: The basis for this problem is bioinformatic and involves designing unique DNA tags that can be attached to biologically derived DNA fragments and then sequenced using a fancy second-generation sequencer. The goal is to be able to recognize each tag, allowing for random insertion, deletion, and substitution errors. The specific DNA sequencing technology has a relatively low error rate per base (~1%), but is less precise when a single base is repeated 2 or more times (motivating the additional constraints imposed in problem 2).
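
    As a baseline rather than the hoped-for literature answer, a hedged greedy sketch: draw random candidates and keep one only if its Levenshtein distance to every accepted tag exceeds c; the no_repeat flag covers problem 2. It can stall if n is infeasible for the given m, s, and c:

        import random

        def levenshtein(a, b):
            # standard two-row dynamic program
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return prev[-1]

        def greedy_tags(n, m, alphabet, c, no_repeat=False):
            tags = []
            while len(tags) < n:
                cand = ''.join(random.choice(alphabet) for _ in range(m))
                if no_repeat and any(x == y for x, y in zip(cand, cand[1:])):
                    continue                    # adjacent repeat: reject (problem 2)
                if all(levenshtein(cand, t) > c for t in tags):
                    tags.append(cand)
            return tags

        # e.g. greedy_tags(96, 8, 'ACGT', 3, no_repeat=True)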

    Read the article

  • PL/SQL - How to pull data from 3 tables based on latest created date

    - by Nancy
    Hello, I'm hoping someone can help me, as I've been stuck on this problem for a few days now. Basically I'm trying to pull data from 3 tables in Oracle: 1) the Orders table, 2) the Vendor table, and 3) the Master Data table. Here's what the 3 tables look like:

        Table 1: BIZ_DOC2 (Orders table)
            OBJECTID          (unique key)
            UNIQUE_DOC_NAME   (document name, i.e. ORD-005)
            CREATED_AT        (date the order was created)

        Table 2: UDEF_VENDOR (Vendors table)
            PARENT_OBJECT_ID    (matches up to the OBJECTID in the Orders table)
            VENDOR_OBJECT_NAME  (the name of the vendor, i.e. Acme)

        Table 3: BIZ_UNIT (Master Data table)
            PARENT_OBJECT_ID      (matches up to the OBJECTID in the Orders table)
            BIZ_UNIT_OBJECT_NAME  (the name of the business unit, i.e. widget A, widget B)

    Note: the Vendors table and the Master Data table have no link between them except through the Orders table. I can join all of the data from the tables, and before selecting the latest created date it looks something like this:

        ORD-005 | Widget A | Acme | 3/14/10
        ORD-005 | Widget B | Acme | 3/14/10
        ORD-004 | Widget C | Acme | 3/10/10

    Ideally I'd like to return the latest order for each vendor. However, each order may contain multiple business units (e.g. types of widgets), so if a vendor's latest record is ORD-005 and the order contains 2 business units, here's what the result set should look like (columns: UNIQUE_DOC_NAME, BIZ_UNIT_OBJECT_NAME, VENDOR_OBJECT_NAME, CREATED_AT) after selecting by latest order date:

        ORD-005 | Widget A | Acme | 3/14/10
        ORD-005 | Widget B | Acme | 3/14/10

    I tried using SELECT MAX and several variations of sub-queries, but I just can't seem to get it working. Any help would be hugely appreciated!
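
    A hedged sketch using an analytic function (table and column names taken from the descriptions above): rank each vendor's orders by creation date and keep only rows from the top-ranked order. DENSE_RANK rather than ROW_NUMBER preserves one row per business unit on that latest order:

        SELECT unique_doc_name, biz_unit_object_name, vendor_object_name, created_at
        FROM (
            SELECT o.unique_doc_name,
                   bu.biz_unit_object_name,
                   v.vendor_object_name,
                   o.created_at,
                   DENSE_RANK() OVER (PARTITION BY v.vendor_object_name
                                      ORDER BY o.created_at DESC) AS rnk
            FROM biz_doc2 o
            JOIN udef_vendor v ON v.parent_object_id = o.objectid
            JOIN biz_unit   bu ON bu.parent_object_id = o.objectid
        )
        WHERE rnk = 1;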

    Read the article

  • Improve performance of searching JSON object with jQuery

    - by cale_b
    Please forgive me if this is answered on SO somewhere already. I've searched, and it seems as though this is a fairly specific case. Here's an example of the JSON (NOTE: this is very stripped down -- it is dynamically loaded, and currently there are 126 records):

        var layout = {
            "2": [
                {"id":"40","attribute_id":"2","option_id":null,"design_attribute_id":"4","design_option_id":"131","width":"10","height":"10","repeat":"0","top":"0","left":"0","bottom":"0","right":"0","use_right":"0","use_bottom":"0","apply_to_options":"0"},
                {"id":"41","attribute_id":"2","option_id":"115","design_attribute_id":"4","design_option_id":"131","width":"2","height":"1","repeat":"0","top":"0","left":"0","bottom":"4","right":"2","use_right":"0","use_bottom":"0","apply_to_options":"0"},
                {"id":"44","attribute_id":"2","option_id":"118","design_attribute_id":"4","design_option_id":"131","width":"10","height":"10","repeat":"0","top":"0","left":"0","bottom":"0","right":"0","use_right":"0","use_bottom":"0","apply_to_options":"0"}
            ],
            "5": [
                {"id":"326","attribute_id":"5","option_id":null,"design_attribute_id":"4","design_option_id":"154","width":"5","height":"5","repeat":"0","top":"0","left":"0","bottom":"0","right":"0","use_right":"0","use_bottom":"0","apply_to_options":"0"}
            ]
        };

    I need to match the right combination of values. Here's the function I currently use:

        function drawOption(attid, optid) {
            var attlayout = layout[attid];
            $.each(attlayout, function(k, v) {
                // d_att_id and d_opt_id are globally scoped variables set elsewhere
                if (v.design_attribute_id == d_att_id
                    && v.design_option_id == d_opt_id
                    && v.attribute_id == attid
                    && (v.apply_to_options == 1 || v.option_id === optid)) {
                    // Do stuff here
                }
            });
        }

    The issue is that I might iterate through 10-15 layouts (unique attids), and any given layout (attid) might have as many as 50 possibilities, which means that this loop is being run A LOT. Given the multiple criteria that have to be matched, would an AJAX call work better? (This JSON is dynamically created via PHP, so I could craft a PHP function that could possibly do this more efficiently.) Or am I completely missing something about how to find items in a JSON object? As always, any suggestions for improving the code are welcome!

    EDIT: I apologize for not making this clear, but the purpose of this question is to find a way to improve the performance. The page has a lot of JavaScript, and this is a location where I know that performance is lower than it could be.
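
    A hedged sketch of trading the repeated scans for one indexing pass: group the rows once by the ids that are fixed per click, so each drawOption call only walks a handful of candidates instead of the whole layout:

        // build once, after the layout JSON is loaded
        var layoutIndex = {};
        $.each(layout, function(attid, rows) {
            $.each(rows, function(i, v) {
                var key = v.attribute_id + '|' + v.design_attribute_id + '|' + v.design_option_id;
                (layoutIndex[key] = layoutIndex[key] || []).push(v);
            });
        });

        function drawOption(attid, optid) {
            var candidates = layoutIndex[attid + '|' + d_att_id + '|' + d_opt_id] || [];
            $.each(candidates, function(i, v) {
                if (v.apply_to_options == 1 || v.option_id === optid) {
                    // Do stuff here
                }
            });
        }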

    Read the article

  • [UNIX] Sort lines of massive file by number of words on line (ideally in parallel)

    - by conradlee
    I am working on a community detection algorithm for analyzing social network data from Facebook. The first task, detecting all cliques in the graph, can be done efficiently in parallel, and leaves me with an output like this:

        17118 17136 17392
        17064 17093 17376
        17118 17136 17356 17318 12345
        17118 17136 17356 17283
        17007 17059 17116

    Each of these lines represents a unique clique (a collection of node ids), and I want to sort these lines in descending order by the number of ids per line. In the case of the example above, here's what the output should look like:

        17118 17136 17356 17318 12345
        17118 17136 17356 17283
        17118 17136 17392
        17064 17093 17376
        17007 17059 17116

    (Ties -- i.e., lines with the same number of ids -- can be sorted arbitrarily.) What is the most efficient way of sorting these lines? Keep the following points in mind:

    - The file I want to sort could be larger than the physical memory of the machine.
    - Most of the machines that I'm running this on have several processors, so a parallel solution would be ideal.
    - An ideal solution would just be a shell script (probably using sort), but I'm open to simple solutions in Python or Perl (or any language, as long as it makes the task simple).
    - This task is in some sense very easy -- I'm not just looking for any old solution, but rather for a simple and above all efficient solution.
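
    A hedged shell sketch of the decorate-sort-undecorate idiom: prefix each line with its word count, sort numerically in reverse, then strip the count. GNU sort already spills to disk for inputs larger than RAM, and newer coreutils versions accept --parallel (file names and tuning values are illustrative):

        awk '{ print NF, $0 }' cliques.txt \
            | sort -k1,1nr --parallel=4 -S 2G \
            | cut -d' ' -f2- > cliques.sorted.txt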

    Read the article

  • Embedding Lua functions as member variables in Java

    - by Zarion
    Although the program I'm working on is in Java, answering this from a C perspective is also fine, considering that most of this is either language-agnostic or happens on the Lua side of things.

    In the outline I have for the architecture of a game I'm programming, individual types of game objects within a particular class (e.g. creatures, items, spells) are loaded from a data file. Most of their properties are simple data types, but I'd like a few of these members to actually contain simple scripts that define, for example, what an item does when it's used. The scripts will be extremely simple, since all fundamental game actions will be exposed through an API from Java. The Lua is simply responsible for stringing a couple of these basic functions together and setting arguments.

    The question is largely about the best way to store a reference to a specific Lua function as a member of a Java class. I understand that if I store the Lua code as a string and call lua_dostring, Lua will compile the code fresh every time it's called. So the function needs to be defined somehow, and a reference to this specific function wrapped in a Java function object.

    One possibility that I've considered is this: during the data loading process, when the loader encounters a script definition in a data file, it extracts this string, decorates the function name using the associated object's unique ID, calls lua_dostring on the string containing a full function definition, and then wraps the generated function name in a Java function object. A function declared in a script run with lua_dostring should still be added to the global function table, correct? I'm just wondering if there's a better way of going about this. I admit that my knowledge of Lua at this point is rather superficial and theoretical, so it's possible that I'm overlooking something obvious.
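
    From the C side, a hedged sketch of the registry-reference approach, which avoids both recompiling per call and mangling names into the global table: compile the chunk once, stash the resulting anonymous function with luaL_ref, and store the integer ref as the object's member:

        /* compile once; returns a registry ref the host object can store */
        int load_script(lua_State *L, const char *code) {
            if (luaL_loadstring(L, code) != 0) {
                lua_pop(L, 1);                       /* pop the error message */
                return LUA_NOREF;
            }
            return luaL_ref(L, LUA_REGISTRYINDEX);   /* pops the compiled function */
        }

        /* call the stored function later, by ref */
        void run_script(lua_State *L, int ref) {
            lua_rawgeti(L, LUA_REGISTRYINDEX, ref);  /* push the function */
            lua_pcall(L, 0, 0, 0);                   /* 0 args, 0 results */
        }

    A Java binding such as LuaJava can wrap the same idea, with the int ref held as the member variable.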

    Read the article

  • How to merge duplicates in 2D python arrays

    - by Wei Lou
    Hi, I have a set of data similar to this:

        No  Start Time    End Time      CallType  Info
        1   13:14:37.236  13:14:53.700  Ping1     RTT(Avr):160ms
        2   13:14:58.955  13:15:29.984  Ping2     RTT(Avr):40ms
        3   13:19:12.754  13:19:14.757  Ping3_1   RTT(Avr):620ms
        3   13:19:12.754                Ping3_2   RTT(Avr):210ms
        4   13:14:58.955  13:15:29.984  Ping4     RTT(Avr):360ms
        5   13:19:12.754  13:19:14.757  Ping1     RTT(Avr):40ms
        6   13:19:59.862  13:20:01.522  Ping2     RTT(Avr):163ms
        ...

    When I parse through it, I need to merge the results of Ping3_1 and Ping3_2, then take the average of those two rows and export it as one row. So the end result would be like this:

        No  Start Time    End Time      CallType  Info
        1   13:14:37.236  13:14:53.700  Ping1     RTT(Avr):160ms
        2   13:14:58.955  13:15:29.984  Ping2     RTT(Avr):40ms
        3   13:19:12.754  13:19:14.757  Ping3     RTT(Avr):415ms
        4   13:14:58.955  13:15:29.984  Ping4     RTT(Avr):360ms
        5   13:19:12.754  13:19:14.757  Ping1     RTT(Avr):40ms
        6   13:19:59.862  13:20:01.522  Ping2     RTT(Avr):163ms

    Currently I am concatenating columns 0 and 1 to make a unique key, finding duplicates there, and then doing the rest of the special treatment for those parallel pings. It is not elegant at all. I just wonder what a better way to do it would be. Thanks!
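
    A hedged sketch of that grouping done explicitly (rows assumed to be (no, start, end, calltype, info) tuples): bucket on (No, Start Time), average the RTTs per bucket, and emit one row with the suffixed call type collapsed:

        import re
        from collections import OrderedDict

        def merge_pings(rows):
            groups = OrderedDict()
            for row in rows:                  # row = (no, start, end, calltype, info)
                groups.setdefault((row[0], row[1]), []).append(row)
            merged = []
            for (no, start), grp in groups.items():
                rtts = [int(re.search(r'(\d+)ms', r[4]).group(1)) for r in grp]
                end = next((r[2] for r in grp if r[2]), '')  # first non-empty end time
                calltype = grp[0][3].split('_')[0]           # Ping3_1 -> Ping3
                merged.append((no, start, end, calltype,
                               'RTT(Avr):%dms' % (sum(rtts) // len(rtts))))
            return merged

    On the sample above this yields the single Ping3 row with RTT(Avr):415ms.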

    Read the article

  • Javascript : Submitting a form outside the actual form doesn't work

    - by Ben Fransen
    Hello all, I'm trying to achieve a fairly easy triggering mechanism for deleting multiple items from a table grid. If a user has enough access, he/she is able to delete multiple users from a table. In the table I have set up checkboxes, one per row/user. The name of the checkboxes is UsersToDeletep[], and the value per row is the unique UserID. When a user clicks the button 'Delete selected users', a simple validation takes place to make sure at least one checkbox is selected. After that I call my simple function Submit(form). The function works perfectly when called within the form tags, where I also use it to delete a single user. The function:

        function Submit(form) {
            document.forms[form].submit();
        }

    I've also alerted document.forms[form]. The result is, as expected, [object HTMLFormElement]. But for some reason the form just won't submit, and a page reload takes place. I'm a bit confused and can't seem to figure out what I'm doing wrong. Can anyone point me in the right direction? Thanks in advance! Ben
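
    Two hedged guesses worth checking, sketched below: the triggering button may itself sit inside some form (its default submit reloads the page before your script finishes), and document.forms[form].submit() fails if any control in the target form is named "submit", because that element shadows the submit() method:

        <!-- make the trigger a plain button, or cancel its default action -->
        <button type="button"
                onclick="Submit('deleteUsersForm'); return false;">
            Delete selected users
        </button>

        <!-- and inside the target form, avoid name="submit" on any input -->

    The form name 'deleteUsersForm' is illustrative.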

    Read the article
