Search Results

Search found 5491 results on 220 pages for 'sound scheme'.


  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganizing of data in a clustered index when an insert is done. I assume that it should be more expensive to do inserts on a table which has a clustered index than one that does not because reorganizing the data in a clustered index involves changing the physical layout of the data on the disk. I'm not sure how to phrase my question except through an example I came across at work. Assume there is a table (Junk) and there are two queries that are done on the table, the first query searches by Name and the second query searches by Name and Something. As I'm working on the database I discovered that the table has been created with two indexes, one to support each query, like so: --drop table Junk1 CREATE TABLE Junk1 ( Name char(5), Something char(5), WhoCares int ) CREATE CLUSTERED INDEX IX_Name ON Junk1 ( Name ) CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1 ( Name, Something ) Now when I looked at the two indexes, it seems that IX_Name is redundant since IX_Name_Something can be used by any query that desires to search by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead: --drop table Junk2 CREATE TABLE Junk2 ( Name char(5), Something char(5), WhoCares int ) CREATE CLUSTERED INDEX IX_Name_Something ON Junk2 ( Name, Something ) Someone suggested that the first indexing scheme should be kept since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates for Name and Something). Would that make sense? I think the second indexing method would be better since it means one less index needs to be maintained. I would appreciate any insight into this specific example or directing me to more info on maintenance of clustered indexes.

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for lack of a better one. A Word is arguably the fundamental unit of communication as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language, is a Glyph to another. Unicode 5.1 assigns more than 100,000 values to these Glyphs currently. Out of the approximately 180,000 Words being used in Modern English, it is said that with a vocabulary of about 2,000 Words, you should be able to converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence. // A simplified example of a "Lexical Encoding" String sentence = "How are you today?"; int[] sentence = { 93, 22, 14, 330, QUERY }; In this example each Token in the String was encoded as an Integer. The Encoding Scheme here simply assigned an int value based on a generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, a Word has both a Spelling & Meaning though. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning ..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure. What are other examples of "Lexical Encoding" techniques? If you're interested in where the word-usage statistics come from: http://www.wordcount.org
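    Below is a minimal sketch, in Java, of the word-to-integer step described above. It assumes a usage-ranked word list is available; the list used in main and the QUERY constant are illustrative inventions, not real frequency data.

      import java.util.*;

      // Minimal sketch of a "Lexical Encoding" step: map each word to an integer
      // drawn from a usage-ranked word list, with a constant for the question mark.
      // The ranked list passed in below is a stand-in, not real frequency data.
      public class LexicalEncoder {

          static final int QUERY = -1; // hypothetical code for '?'

          private final Map<String, Integer> codeByWord = new HashMap<>();

          public LexicalEncoder(List<String> wordsByUsageRank) {
              for (int i = 0; i < wordsByUsageRank.size(); i++) {
                  codeByWord.put(wordsByUsageRank.get(i).toLowerCase(Locale.ROOT), i + 1);
              }
          }

          public List<Integer> encode(String sentence) {
              List<Integer> codes = new ArrayList<>();
              for (String token : sentence.split("\\s+")) {
                  boolean question = token.endsWith("?");
                  String word = token.replaceAll("[^A-Za-z]", "").toLowerCase(Locale.ROOT);
                  if (!word.isEmpty()) {
                      codes.add(codeByWord.getOrDefault(word, 0)); // 0 = unknown word
                  }
                  if (question) {
                      codes.add(QUERY);
                  }
              }
              return codes;
          }

          public static void main(String[] args) {
              LexicalEncoder enc = new LexicalEncoder(
                      Arrays.asList("the", "of", "and", "how", "are", "you", "today"));
              System.out.println(enc.encode("How are you today?")); // prints [4, 5, 6, 7, -1]
          }
      }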

    Read the article

  • Using ember-resource with couchdb - how can i save my documents?

    - by Thomas Herrmann
    I am implementing an application using ember.js and couchdb. I chose ember-resource as the database access layer because it nicely supports nested JSON documents. Since couchdb uses the attribute _rev for optimistic locking in every document, this attribute has to be updated in my application after saving the data to the couchdb. My idea to implement this is to reload the data right after saving to the database and get the new _rev back with the rest of the document. Here is my code for this: // Since we use CouchDB, we have to make sure that we invalidate and re-fetch // every document right after saving it. CouchDB uses an optimistic locking // scheme based on the attribute "_rev" in the documents, so we reload it in // order to have the correct _rev value. didSave: function() { this._super.apply(this, arguments); this.forceReload(); }, // reload resource after save is done, expire to make reload really do something forceReload: function() { this.expire(); // Everything OK up to this location Ember.run.next(this, function() { this.fetch() // Sub-Document is reset here, and *not* refetched! .fail(function(error) { App.displayError(error); }) .done(function() { App.log("App.Resource.forceReload fetch done, got revision " + self.get('_rev')); }); }); } This works for most cases, but if I have a nested model, the sub-model is replaced with the old version of the data just before the fetch is executed! Interestingly enough, the correct (updated) data is stored in the database and the wrong (old) data is in the memory model after the fetch, although the _rev attribute is correct (as well as all attributes of the main object). Here is a part of my object definition: App.TaskDefinition = App.Resource.define({ url: App.dbPrefix + 'courseware', schema: { id: String, _rev: String, type: String, name: String, comment: String, task: { type: 'App.Task', nested: true } } }); App.Task = App.Resource.define({ schema: { id: String, title: String, description: String, startImmediate: Boolean, holdOnComment: Boolean, ..... // other attributes and sub-objects } }); Any ideas where the problem might be? Thanks a lot for any suggestion! Kind regards, Thomas

    Read the article

  • Releasing instance if service not enabled?

    - by fuzzygoat
    I would just like to check if I have this right: I am creating an instance of CLLocationManager and then checking if location services are enabled. If they are not enabled, I report an error, release the instance and carry on. Does that look/sound right? locationManager = [[CLLocationManager alloc] init]; BOOL supportsService = [locationManager locationServicesEnabled]; if(supportsService) { [locationManager setDelegate:self]; [locationManager setDistanceFilter:kCLDistanceFilterNone]; [locationManager setDesiredAccuracy:kCLLocationAccuracyBest]; [locationManager startUpdatingLocation]; } else { NSLog(@"Location services not enabled."); [locationManager release]; } ... more code ... Cheers, Gary

    Read the article

  • Basic Login Script using php and mysql inquiry

    - by Matt
    Attempting to write a check for a login script to see if the username is available. Would the best way to write this query be to: (1) check isset($_POST[]) for both values (nick and pass); (2) connect to the database and query it WHERE the requested user nick matches; (3) return the user id if the user nick exists; (4) evaluate isset($id) to see if the user name is taken, and use that to continue to creating an entry? Does this logically sound like a method to check a login without using excessive code? Sorry for not posting the code; it is on another computer and this computer is locked down by my administrator at work... Also, is there another way to evaluate whether a value exists in the database? For instance, instead of setting $id to the return value from the MySQL database, can I just ping the MySQL database for the information and have it return a Boolean result, so I am not exposing any user information? Thanks, Matt
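    The existence check can be done entirely in SQL so no user data ever leaves the database. Here is a hedged sketch of the pattern in Java/JDBC (the original question is PHP, where the same parameterised SELECT works with mysqli or PDO); the users table and nick column names are assumptions for illustration.

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;

      // Sketch: ask the database whether the username exists and get back only a
      // count, never the user's row. Table and column names are illustrative assumptions.
      public class UserCheck {
          public static boolean usernameTaken(Connection conn, String nick) throws SQLException {
              String sql = "SELECT COUNT(*) FROM users WHERE nick = ?";
              try (PreparedStatement ps = conn.prepareStatement(sql)) {
                  ps.setString(1, nick);
                  try (ResultSet rs = ps.executeQuery()) {
                      rs.next();
                      return rs.getInt(1) > 0; // effectively a boolean answer
                  }
              }
          }
      }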

    Read the article

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The Primary Key column is an INT with IDENTITY. There is a Unique Constraint imposed on two other columns with a Non-Clustered Index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance": (1) drop the PK and clustered index; (2) drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT; (3) recreate the PK with a NON-CLUSTERED index; (4) create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT. I am not convinced that this is the right thing to do. I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and new indexing scheme and then copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj

    Read the article

  • Should I be building GUI applications on Windows using Perl & Tk?

    - by CheeseConQueso
    I have a bunch of related Perl scripts that I would like to put together in one convenient place. So I was thinking of building a GUI and incorporating the scripts. I'm using Strawberry Perl on Windows XP and have just installed Tk from cpan about fifteen minutes ago. Before I go for it, I want some sound advice either for or against it. My other option is to translate the Perl scripts into VB and use Visual Studio 2008, but that might be too much hassle for an outcome that might end up all the same had I just stuck with Perl & Tk. I haven't looked yet, but maybe there is a module for Visual Studio that would allow me to invoke Perl scripts? The main requirements are: It must be able to communicate with MySQL It must be able to fetch & parse XML files from the internet It must be transportable, scalable, and sustainable What direction would you take?

    Read the article

  • What happened to the Windows "Midi Mapper"

    - by interstar
    I wrote a Windows program many years ago which created music by sending notes to the "MIDI Mapper" (and thence to the MIDI synth on my sound card). Today, I have a soft-synth which allegedly accepts MIDI information, so I'd assume it should be possible to use today's equivalent of a MIDI Mapper to route the MIDI output from my program to the soft-synth. There's clearly no longer a MIDI Mapper application in Windows, but my program still works (on XP) in that it drives the built-in soundcard synth, so there must be some sort of MIDI handling layer in Windows. How can I get at this? And maybe redirect the MIDI to the soft-synth?

    Read the article

  • iPhone Accelerometer > csv > email

    - by Bradley Powers
    Hi all, I'm trying to collect data for a machine learning project I'm working on. What I'd like to do is collect accelerometer data from an iPhone, save it to a csv and email it to myself. My app currently is able to acquire data from the accelerometer, but I'm at a bit of a loss as to how to proceed. First of all, I'd like to acquire data for a preset amount of time (after playing a sound to the user) which I don't really know how to do, and I can't find good documentation for. Also, I'd like to save that to a csv, which there is some documentation on (specifically using the NSString writeToFile method). Any recommendations/ ideas? Thanks!

    Read the article

  • Android : Handle OAuth callback using intent-filter

    - by Dave Allison
    I am building an Android application that requires OAuth. I have all the OAuth functionality working except for handling the callback from Yahoo. I have the following in my AndroidManifest.xml: <intent-filter> <action android:name="android.intent.action.VIEW"></action> <category android:name="android.intent.category.DEFAULT"></category> <category android:name="android.intent.category.BROWSABLE"></category> <data android:host="www.test.com" android:scheme="http"></data> </intent-filter> where www.test.com will be substituted with a domain that I own. It seems that: (1) this filter is triggered when I click on a link on a page; (2) it is not triggered on the redirect by Yahoo (the browser just opens the website at www.test.com); (3) it is not triggered when I enter the domain name directly in the browser. So can anybody help me with: when exactly this intent-filter will be triggered; any changes to the intent-filter or permissions that would widen the filter to apply to redirect requests; or any other approaches I could use? Thanks for your help.
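    For context on what the filter is meant to feed, here is a hedged sketch of how the callback could be consumed once the intent does arrive. The Activity name is made up, and the oauth_verifier parameter name is the usual OAuth 1.0a one, so treat it as an assumption about Yahoo's redirect.

      import android.app.Activity;
      import android.net.Uri;
      import android.os.Bundle;

      // Sketch only: read the OAuth callback parameters from the launching intent.
      // The Activity name and the exact query parameter are assumptions for illustration.
      public class OAuthCallbackActivity extends Activity {
          @Override
          protected void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              Uri data = getIntent().getData(); // e.g. http://www.test.com/callback?oauth_verifier=...
              if (data != null) {
                  String verifier = data.getQueryParameter("oauth_verifier");
                  // ...exchange the request token plus verifier for an access token here
              }
          }
      }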

    Read the article

  • Is there a FSRef in iPhone SDK or is there something that can be FSRef's alternative?

    - by unknownthreat
    The question may sound stupid, but the thing is this: I am learning how to use an Audio Queue, and the example I've taken (aqtest) was a nice guide for me until I recently found out that aqtest is not for iPhone (stupid me). I searched around the Internet and found out that there is no FSRef for iPhone. If possible, I want to find a way to work around the FSRef thing. So here comes the question: can I use something else instead of FSRef that exists in the iPhone SDK? Or am I missing something?

    Read the article

  • SOAP - Why do I need to query for the values for an update?

    - by Phill Pafford
    I'm taking over a project and wanted to understand if this is common practice using SOAP. With the process that is currently in place, I have to query all the values before I do an update because I need to pass back all the values that are not being updated. Does this sound right? Example Values: fname=phill lname=pafford address=123 main phone:222-555-1212 So if I just wanted to update the phone number I need to query for the record, get all the values and submit these values for an update. Example Update Values: fname=phill lname=pafford address=123 main phone:111-555-1212 I just want to know if this is common practice or should I change the functionality of this?

    Read the article

  • How do you limit a page with multiple flash mp3 players to play one at a time?

    - by Andrew.S
    I am working with the open source flash player at http://flash-mp3-player.net/ and I am trying to figure out how to limit playback to one sound file at a time. I know this has been done on a number of sites but I am unsure how to approach it. Scenario: A page has five different instances of the flash player. The user is listening to one song but clicks on another to listen to it. Goal: The first audio file automatically stops while the second starts playing, instead of both playing at the same time. Do I need to have some sort of JavaScript handler that interacts with the SWF, or something?

    Read the article

  • substitution cypher with different alphabet length

    - by seanizer
    I would like to implement a simple substitution cypher to mask private ids in URLs. I know what my IDs will look like (a combination of uppercase ASCII, digits and underscores), and they will be rather long, as they are composed keys. I would like to use a longer alphabet to shorten the resulting codes (I'd like to use upper and lower case ASCII letters, digits and nothing else). So my incoming alphabet would be [A-Z0-9_] (37 chars) and my outgoing alphabet would be [A-Za-z0-9] (62 chars), so a compression of almost 50% would be available. Let's say my URLs look like this: /my/page/GFZHFFFZFZTFZTF_24_F34 and I want them to look like this instead: /my/page/Ft32zfegZFV5 Obviously both arrays would be shuffled to bring some random order in. This does not have to be secure. If someone figures it out: fine, but I don't want the scheme to be obvious. My desired solution would be to convert the string to an integer representation of radix 37, convert the radix to 62 and use the second alphabet to write out that number. Is there any sample code available that does something similar? Integer.parseInt ( http://java.sun.com/javase/6/docs/api/java/lang/Integer.html#parseInt%28java.lang.String,%20int%29 ) has some similar logic, but it is hard-coded to use standard digit behavior. Any hints? I am using Java to implement this, but code or pseudo-code in any other language is of course also helpful.
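    A minimal sketch of that radix-conversion idea in Java, using BigInteger so long composed keys do not overflow. The two alphabets are shown in plain order here, whereas in practice both would be shuffled copies; the example ID is the one from the question.

      import java.math.BigInteger;

      // Sketch: read the ID as a base-37 number over the input alphabet, then
      // re-express it in base-62 over the output alphabet (decode is the reverse).
      // Note: leading index-0 characters ('A' here) behave like leading zeros and are lost.
      public class IdCodec {
          private static final String IN  = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_";                          // 37 chars
          private static final String OUT = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; // 62 chars

          public static String encode(String id)   { return convert(id, IN, OUT); }
          public static String decode(String code) { return convert(code, OUT, IN); }

          private static String convert(String value, String from, String to) {
              BigInteger n = BigInteger.ZERO;
              BigInteger fromBase = BigInteger.valueOf(from.length());
              for (char c : value.toCharArray()) {
                  n = n.multiply(fromBase).add(BigInteger.valueOf(from.indexOf(c)));
              }
              BigInteger toBase = BigInteger.valueOf(to.length());
              StringBuilder sb = new StringBuilder();
              while (n.signum() > 0) {
                  BigInteger[] qr = n.divideAndRemainder(toBase);
                  sb.append(to.charAt(qr[1].intValue()));
                  n = qr[0];
              }
              return sb.length() == 0 ? String.valueOf(to.charAt(0)) : sb.reverse().toString();
          }

          public static void main(String[] args) {
              String code = encode("GFZHFFFZFZTFZTF_24_F34");
              System.out.println(code + " -> " + decode(code)); // round-trips back to the original ID
          }
      }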

    Read the article

  • Are functional programming languages good for practical tasks?

    - by Clueless
    It seems to me from my experimenting with Haskell, Erlang and Scheme that functional programming languages are a fantastic way to answer scientific questions. For example, taking a small set of data and performing some extensive analysis on it to return a significant answer. It's great for working through some tough Project Euler questions or trying out the Google Code Jam in an original way. At the same time it seems that by their very nature, they are more suited to finding analytical solutions than actually performing practical tasks. I noticed this most strongly in Haskell, where everything is evaluated lazily and your whole program boils down to one giant analytical solution for some given data that you either hard-code into the program or tack on messily through Haskell's limited IO capabilities. Basically, the tasks I would call 'practical', such as "accept a request, find and process the requested data, and return it formatted as needed", seem to translate much more directly into procedural languages. The most luck I have had finding a functional language that works like this is Factor, which I would liken to a reverse-polish-notation version of Python. So I am just curious whether I have missed something in these languages or I am just way off the ball in how I ask this question. Does anyone have examples of functional languages that are great at performing practical tasks or practical tasks that are best performed by functional languages?

    Read the article

  • Designing a service for consumption on multiple mobile platforms

    - by Nate Bross
    I am building and designing a (mostly) read-only interface to some data. I'll be using ASP.NET MVC to build a pseudo-RESTful API. I'm wondering if anyone can provide some resources for building full-client applications for various mobile platforms: iPhone, Android, Blackberry, Windows Mobile, etc. I'm thinking that serving up XML data is going to be the most simple and universal, but parsing XML in Objective-C, for example, doesn't sound like fun to me; maybe there are some good libraries out there to help ease this task? In other words, what format will be the quickest to implement on the client side? Are there any JSON parsers for iPhone or Android? I know there are .NET JSON parsers, but I'm not sure about other platforms. Is there another format that might be better? Or should I stick with pure XML and deal with it on each platform differently?
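    On the Android side at least, a JSON parser (the org.json classes) ships with the platform, so consuming a JSON response is only a few lines. Here is a hedged sketch, where the field names ("items", "id", "title") are assumptions about what the service might return, purely for illustration.

      import org.json.JSONArray;
      import org.json.JSONException;
      import org.json.JSONObject;

      // Sketch of parsing a JSON payload with Android's bundled org.json classes.
      // The field names are illustrative assumptions, not a real API contract.
      public class FeedParser {
          public static void printItems(String json) throws JSONException {
              JSONObject root = new JSONObject(json);
              JSONArray items = root.getJSONArray("items");
              for (int i = 0; i < items.length(); i++) {
                  JSONObject item = items.getJSONObject(i);
                  System.out.println(item.getInt("id") + ": " + item.getString("title"));
              }
          }
      }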

    Read the article

  • AVAudioRecorder prepareToRecord works, but record fails

    - by iPadDeveloper2011
    I just built and tested a basic AVAudioRecorder/AVAudioPlayer sound recorder and player. Eventually I got this working on the device, as well as the simulator. My player/recorder code is in a single UIView subclass. Unfortunately, when I copy the class into my main project, it no longer works (on the device--the simulator is fine). prepareToRecord is working fine, but record isn't. Here is some code: audioRecorder = [[ AVAudioRecorder alloc] initWithURL:url settings:recordSettings error:&error]; if ([audioRecorder prepareToRecord]){ audioRecorder.meteringEnabled = YES; if(![audioRecorder record])NSLog(@"recording failed!"); }else { int errorCode = CFSwapInt32HostToBig ([error code]); NSLog(@"preparedToRecord=NO Error: %@ [%4.4s])" , [error localizedDescription], (char*)&errorCode); ... I get "recording failed". Anyone have any ideas why this is happening?

    Read the article

  • How to draw a flowchart for code that opens a text file and reads from it

    - by problematic
    Like this code: fp1=fopen("Fruit.txt","r"); if(fp1==NULL) { printf("ERROR in opening file\n"); return 1; } else { for(i=0;i<lines;i++)//reads Fruits.txt database { fgets(product,sizeof(product),fp1); id[i]=atoi(strtok(product,",")); strcpy(name[i],strtok(NULL,",")); price[i]=atof(strtok(NULL,",")); stock[i]=atoi(strtok(NULL,"\n")); } } fclose(fp1); The flowchart symbols sound too similar for me to tell their functions apart; can anyone help me with any method, or with the names of the shapes according to this site: http://www.breezetree.com/article-excel-flowchart-shapes.htm

    Read the article

  • What keying option does the keychain use?

    - by Rudiger
    I have read into the keychain and have found that it uses Triple DES. What I can't find is what keying option it uses. I am guessing/hoping that it's keying option 1, where all 3 passwords are unique, but if that's the case I can only think of two passwords it can use (the user password and the App ID that comes from your dev cert), so where is the third coming from? Is it a key private to Apple? If it's keying option 2 (first and third key are the same) it might not be secure enough for our company to rely on. Although that might sound paranoid, I have to justify to our security department that it is secure enough.

    Read the article

  • UIViewController memory management

    - by jAmi
    Hi, I have a very basic issue of memory management with my UIViewController (or any other object that I create). The problem is that in Instruments my object allocation graph is always rising, even though I am calling release on objects and then assigning them to nil. I have 2 UIViewController sub-classes, each initializing with a NIB. I add the first ViewController to the main window like [window addSubview:first.view]; Then in my first ViewController nib file I have a Button which loads the second ViewController like: -(IBAction)loadSecondView{ if(second!=nil){ //second is set as an iVar and @property (nonatomic, retain)ViewController2* second; [second release]; second=nil; } second=[[ViewController2 alloc] initWithNibName:@"ViewController2" bundle:nil]; [self.view addSubview:second.view]; } In my (second) ViewController2 I have a button with an action method -(IBAction) removeSecond{ [self.view removeFromSuperview]; } Please let me know if the above scheme works in a managed way for memory...? In Instruments it does not show any allocations being released and the allocation graph keeps on rising.

    Read the article

  • What application domains are CPU bound and will tend to benefit from multi-core technologies?

    - by Glomek
    I hear a lot of people talking about the revolution that is coming in programming due to multi-core processors and parallelism, but I can't shake the feeling that for most of us, CPU cycles aren't the bottleneck. Pretty much all of my programs have been I/O bound in one way or another (database, filesystem, network, user interaction, etc.) for a very long time. Now I can think of a few areas where CPU cycles are a limiting factor, like code breaking, graphics, sound, some forms of simulation (weather, physics, etc.), and some forms of mathematical research, but they all seem like fairly specialized application domains. My general impression is that most programs are still I/O bound and that for most of our industry CPUs have been plenty fast for quite a while now. Am I off my rocker? What other application domains are CPU bound today? Do any of them include a large portion of the programming population? In essence, I'm wondering whether the multi-core CPUs will impact very many of us, and if so, how?

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.) So here's the problem: With peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID) then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate: (1) abandon Entity Framework and use another ORM, either NHibernate (and give up LINQ support) or linq2sql (and give up future support, not to mention getting bound to SQL Server as the DB); (2) abandon GUIDs and go with another PK strategy; (3) devise a method to generate sequential GUIDs (COMBs?) at the application layer. I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
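    On option 3: as I understand the classic COMB trick, you take a random GUID and overwrite its last six bytes with a timestamp, because SQL Server weighs those bytes most heavily when ordering uniqueidentifier values, so successive keys cluster together. A rough sketch follows, written in Java purely to illustrate the technique; the real implementation would live in the .NET application layer.

      import java.nio.ByteBuffer;
      import java.util.UUID;

      // Rough sketch of a COMB-style sequential GUID: a random GUID whose last six
      // bytes are replaced with the current time in milliseconds, so values created
      // close together sort near each other. Details (byte order, time source) are
      // assumptions for illustration, not a drop-in replacement for newsequentialid().
      public final class CombGuid {
          public static UUID next() {
              UUID random = UUID.randomUUID();
              ByteBuffer buf = ByteBuffer.allocate(16);
              buf.putLong(random.getMostSignificantBits());
              buf.putLong(random.getLeastSignificantBits());
              byte[] bytes = buf.array();

              long millis = System.currentTimeMillis();
              for (int i = 0; i < 6; i++) {
                  // Overwrite bytes 10..15 with the low 48 bits of the timestamp, most significant byte first.
                  bytes[10 + i] = (byte) (millis >>> (8 * (5 - i)));
              }

              ByteBuffer out = ByteBuffer.wrap(bytes);
              return new UUID(out.getLong(), out.getLong());
          }

          public static void main(String[] args) {
              System.out.println(next());
              System.out.println(next()); // values generated close together share their trailing timestamp bytes
          }
      }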

    Read the article

  • git-svn: reset tracking for master

    - by digitala
    I'm using git-svn to work with an SVN repository. My working copies have been created using git svn clone -s http://foo.bar/myproject so that my working copy follows the default directory scheme for SVN (trunk, tags, branches). Recently I've been working on a branch which was created using git-svn branch myremotebranch and checked-out using git checkout --track -b mybranch myremotebranch. I needed to work from multiple locations, so from the branch I git-svn dcommit-ed files to the SVN repository quite regularly. After finishing my changes, I switched back to the master and executed a merge, committed the merge, and tried to dcommit the successful merge to the remote trunk. It seems as though after the merge the remote tracking for the master has switched to the branch I was working on: # git checkout master # git merge mybranch ... (successful) # git add . # git commit -m '...' # git svn dcommit Committing to http://foo.bar/myproject/branches/myremotebranch ... # Is there a way I can update the master so that it's following remotes/trunk as before the merge? I'm using git 1.7.0.5, if that's any help.

    Read the article

  • SOS: AudioFormat when writing to file in FreeTTS

    - by user330793
    Very annoying problem. I have developed a freeTTS application of the freetts class that write captured audio to file however I am having some very annoying problems. When setting the audio player to singlefileaudio player I try to also set the audioformat with my own default values for sampleRate, sampleSizeInBits, channels, signed and bigEndian. Now I access AudioPlayer.get methods to show these values in runtime just to ensure they are set to what I set them and they match those values. However when file writing completes and I check the properties of the resulting wave file, they are set to the audioPlayer default settings. Normally this will be fine except I have to read the files into another application which has fixed audio property settings so I always get a resulting output that sounds like am fast forwarding the sound and listening to it at the same time. Obviously because of the different sampling rates. I need help please. Thanx, Henry

    Read the article

  • How can I make an event created through Google Calendar's API send an invitation email?

    - by Cebjyre
    I'm trying to create an event through the API and it is mostly working, with the exception that while the new events are being created in the invitees calendars, no emails are being sent. Creating the event from the web interface is pushing the event through, as well as sending the email (except one account that doesn't get any notifications at all, but that's not relevant to my current problem). The event I am trying to push in is: <entry xmlns='http://www.w3.org/2005/Atom' xmlns:gd='http://schemas.google.com/g/2005'> <category scheme='http://schemas.google.com/g/2005#kind' term='http://schemas.google.com/g/2005#event'></category> <title type='text'>test event</title> <content type='text'>content.</content> <gd:transparency value='http://schemas.google.com/g/2005#event.opaque'> </gd:transparency> <gd:eventStatus value='http://schemas.google.com/g/2005#event.confirmed'> </gd:eventStatus> <gd:where valueString='somewhere'></gd:where> <gd:who email="[redacted]" rel='http://schemas.google.com/g/2005#event.attendee' valueString='Me'><gd:attendeeStatus value='http://schemas.google.com/g/2005#event.invited'/></gd:who> <gd:who email="[redacted again]" rel='http://schemas.google.com/g/2005#event.organizer' valueString='Also Me'><gd:attendeeStatus value='http://schemas.google.com/g/2005#event.accepted'/></gd:who> <gd:when startTime='2010-05-18T15:30:00.000+10:00' endTime='2010-05-18T16:00:00.000+10:00'></gd:when> </entry> And when I request event lists I can't see any large difference between events created through the API and through the web interface.

    Read the article
