Search Results

Search found 61631 results on 2466 pages for 'duplicate data'.

  • The Importance of Backing Up Your Website Or Blog

    Backing up a site or blog consists of storing files and data in another location. That way, if something should happen to your site or blog, you'll still have a copy of all the data. Backing up the information isn't all that difficult, and you can save a lot of time and effort in doing so.

    Read the article

  • Oracle Magazine, November/December 2008

    Oracle Magazine November/December features articles on our Editors' Choice Awards 2008, the new HP Oracle Database Machine, using task flows, Cursor FOR Loops, Oracle Data Access Components, Oracle Active Data Guard, SQL Developer and PL/SQL constructs, Oracle Database 11g, questions for Tom Kyte and much more.

    Read the article

  • Custom Business Application Development in PHP - Features & Benefits

    A revolution is taking place within the business application arena today. Just a few years back, most custom business applications such as CRM, ERP, data mining and other business information systems were inflexible and expensive. The database ran on a server on the company's premises, and each desktop machine ran a client application.

    Read the article

  • No Silverlight and Preloader Experience(ish) - in 10 seconds...

    Here are the basics:
    <body background="Img/ScreenShot.png">
      <object id="SilverlightControlObj" name="SilverlightControlObjNm" data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
        <param name="source" value="ClientBin/SUGWTK.xap" />
        <param name="onError" value="onSilverlightError" />
        <param name="background" value="white" />
        <param name="minRuntimeVersion" value="3.0.40624.0" />
        <param name="autoUpgrade" value="true"...

    Read the article

  • Developing a Tablet Application for Sales Force

    Mobile access to data is becoming part of our lives. Not only in the personal sphere, but across a variety of industries, there is a growing demand for mobile data access. People use laptops, smartphones and pocket PCs, and the market is opening up to tablets more than ever before. Let's find out what developers might use when developing tablet applications.

    Read the article

  • Routing Internet traffic over specific network interfaces [on hold]

    - by dipamchang
    I want to route my internet traffic over all of my available connections (such as LAN and a 3G data card) based on conditions: for example, if a website is blocked over the LAN, that traffic should go through the data card (or another available internet connection). My ultimate goal is to integrate this feature into the web browser I have already built using C# and the .NET Framework. I have found that one can add a route by using the following cmd command - route add DestinationIP mask subnet InterfaceGatewayIP - but I am stuck on how this should be implemented in C#.
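
    One way to do this from C# is simply to shell out to route.exe, the same command quoted above. The sketch below is illustrative only: the RouteHelper class and the placeholder addresses in the usage comment are not from the question, and adding routes requires administrative rights.

        using System.Diagnostics;

        class RouteHelper
        {
            // Adds a static route by invoking route.exe (needs admin rights).
            public static void AddRoute(string destination, string mask, string gateway)
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "route",
                    Arguments = string.Format("add {0} mask {1} {2}", destination, mask, gateway),
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
            }
        }

        // Usage with placeholder values:
        // RouteHelper.AddRoute("203.0.113.10", "255.255.255.255", "192.168.1.1");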

    Read the article

  • Which browsers handle `Content-Encoding: gzip` and which of them have any special requirements on encoding quality?

    - by user1049847
    I am creating a "hand made" HTTP/1.0 and HTTP/1.1 server. I recently integrated a zip library, so now I can stream gzip-encoded data in and out. I wonder: Which major browsers (the ones still alive - IE6-IE10, Chrome, FF, etc.) send Accept-Encoding: deflate, gzip, ... and so can handle Content-Encoding: gzip today? Which of them send any quality expectations? Which of them can send gzip-encoded POST requests and multipart/form-data to my server?
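
    For the server side of this, a minimal sketch of honouring Accept-Encoding might look like the following. The helper name and the assumption that the response body is already a byte array are mine, not from the question; the caller still has to emit the Content-Encoding: gzip header when the body is compressed.

        using System;
        using System.IO;
        using System.IO.Compression;

        static class GzipResponse
        {
            // Compresses the body with gzip when the client's Accept-Encoding lists gzip;
            // otherwise returns the body unchanged.
            public static byte[] MaybeGzip(string acceptEncoding, byte[] body, out bool gzipped)
            {
                gzipped = acceptEncoding != null &&
                          acceptEncoding.IndexOf("gzip", StringComparison.OrdinalIgnoreCase) >= 0;
                if (!gzipped)
                    return body;

                using (var ms = new MemoryStream())
                {
                    using (var gz = new GZipStream(ms, CompressionMode.Compress))
                        gz.Write(body, 0, body.Length);
                    return ms.ToArray();
                }
            }
        }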

    Read the article

  • How to Increase Traffic to Websites by Using Keywords

    Keywords play an important role in increasing traffic to websites. Every page on the Internet is indexed under certain keywords, and search engines match pages against the keywords you enter when searching for a particular piece of data or an article.

    Read the article

  • Oracle Advanced Security and Oracle Data Masking - OTN on-demand seminar (Japanese)

    - by Yusuke.Yamamoto
    Date: 2011/05/11. An OTN Japan on-demand seminar covering database encryption with Oracle Advanced Security and data masking with Oracle Data Masking. Webcast: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/Masking05111500.wmv Slides: http://www.oracle.com/technetwork/jp/ondemand/db-technique/0511-1500-encry-masking-400262-ja.pdf

    Read the article

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database called Users:

    Users
    ------
    ID (PK, Identity)
    Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft "where not exists" kinds of queries. First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn more to the database having that logic, as this prevents any other type of client from inserting duplicate data. My other issue is related to LINQ to SQL. I have the following code:

    public class TestRepo
    {
        DatabaseDataContext database = new DatabaseDataContext();

        public void Add(string username)
        {
            database.Users.InsertOnSubmit(new User() { Username = username });
        }

        public void Save()
        {
            database.SubmitChanges();
        }
    }

    And then I iterate over a collection and insert new users, ignoring any exceptions:

    TestRepo repo = new TestRepo();
    foreach (var name in new string[] { "Tim", "Bob", "John" })
    {
        try
        {
            repo.Add(name);
            repo.Save();
        }
        catch { }
    }

    The first time this is run, great - I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that item from the DB) and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate) all of the subsequent inserts fail - even when there isn't a row in the table that would cause a unique violation. Can anyone explain this? P.S. The only workaround I could find was to instantiate the repo each time before the insert, then it worked exactly as expected - indicating that it's something to do with the LINQ to SQL DataContext. Thanks.
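
    The behaviour described is consistent with the failed insert staying in the DataContext's pending change set, so every later SubmitChanges retries it. A minimal sketch of the poster's own workaround - a fresh DataContext per insert - might look like this (DatabaseDataContext and User are the types from the question; the error numbers in the comment are the usual SQL Server unique-violation codes):

        foreach (var name in new[] { "Tim", "Bob", "John" })
        {
            using (var db = new DatabaseDataContext())
            {
                try
                {
                    db.Users.InsertOnSubmit(new User() { Username = name });
                    db.SubmitChanges();
                }
                catch (System.Data.SqlClient.SqlException)
                {
                    // Error 2601/2627 = unique index / unique constraint violation:
                    // treat it as "already exists" and move on.
                }
            }
        }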

    Read the article

  • Command /Developer/usr/bin/dsymutil failed with exit code 10

    - by Evan Robinson
    I am getting the above error message randomly (so far as I can tell) on iPhone projects. Occasionally it will go away upon either:

    - Clean
    - Restart Xcode
    - Reboot
    - Reinstall Xcode

    But sometimes it won't. When it won't, the only solution I have found is to take all the source material, import it into a new project, and then redo all the connections in IB. Then I'm good until it strikes again. Anybody have any suggestions?

    [Update 20091030] I have tried building both debug and release versions, both full and lite versions. I've also tried switching the debug symbols from DWARF with external dSYM file to DWARF and to stabs. Clean builds in all formats make no difference. Permission repairs change nothing. Setting up a new user has no effect. Same error on the builds. Thanks for the suggestions!

    [Update 20091031] Here's an easier and (apparently) reliable workaround. It hinges upon the discovery that the problem is linked to a target, not a project.

    1. In the same project file, create a new target.
    2. Option-Drag (copy) all the files from the BAD target's 'Copy Bundle Resources' folder to the NEW target's 'Copy Bundle Resources' folder.
    3. Repeat (2) with 'Compile Sources' and 'Link Binary With Libraries'.
    4. Duplicate the Info.plist file for the BAD target and name it correctly for the NEW target.
    5. Build the NEW target!

    [Update 20100222] Apparently an IDE bug, now apparently fixed, although Apple does not allow direct access to the original of a duplicate bug report. I can no longer reproduce this behaviour, so hopefully it is dead, dead, dead.

    Read the article

  • Update multiple rows with known keys without inserting new rows if nonexistent keys are found

    - by Kirzilla
    Hello, let's imagine that we have a table items:

    table: items
    item_id INT PRIMARY AUTO_INCREMENT
    title VARCHAR(255)
    views INT

    Let's imagine that it is filled with something like (1, item-1, 10), (2, item-2, 10), (3, item-3, 15). I want to do a multi-row update of views for these items from data taken from this array ([item_id] => [views]):

    '1' => '50',
    '2' => '60',
    '3' => '70',
    '5' => '10'

    IMPORTANT! Please note that we have item_id=5 in the array, but we don't have item_id=5 in the database. I could use INSERT ... ON DUPLICATE KEY UPDATE, but that way item_id=5 would be inserted into the items table. How do I avoid inserting the new key? I just want item_id=5 to be skipped because it is not in the table. Of course, before execution I can select the existing keys from the items table, compare them with the keys in the array, delete the nonexistent keys and then perform INSERT ... ON DUPLICATE KEY UPDATE. But maybe there is a more elegant solution? Thank you.
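
    One possibly more elegant route, sketched below with the values from the question, is to UPDATE against a derived table of the new values; a row whose item_id does not already exist in items simply finds no match and is skipped, so nothing is ever inserted.

        UPDATE items AS i
        JOIN (
            SELECT 1 AS item_id, 50 AS views
            UNION ALL SELECT 2, 60
            UNION ALL SELECT 3, 70
            UNION ALL SELECT 5, 10   -- no matching row in items, so it is ignored
        ) AS v ON v.item_id = i.item_id
        SET i.views = v.views;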

    Read the article

  • Install CLSQL on Mac OS X

    - by Ken
    I have SBCL installed (via macports/darwinports) on my Intel Core 2 Duo Macbook running 10.5.8. I've installed several libraries like this: (require 'asdf) (require 'asdf-install) (asdf-install:install 'cl-who) But when I tried to install CLSQL this way ('clsql) after it downloaded, I got this: ... ; registering #<SYSTEM CLSQL-UFFI {123D9E01}> as CLSQL-UFFI ; $ cd /Users/ken/.sbcl/site/clsql-5.0.5/uffi/; make cc -arch x86_64 -arch i386 -bundle /usr/lib/bundle1.o -flat_namespace -undefined suppress clsql_uffi.c -o clsql_uffi.dylib ld: duplicate symbol dyld_stub_binding_helper in /usr/lib/bundle1.o and /usr/lib/bundle1.o for architecture i386 ld: duplicate symbol dyld_stub_binding_helper in /usr/lib/bundle1.o and /usr/lib/bundle1.o for architecture x86_64 collect2: ld returned 1 exit status collect2: ld returned 1 exit status lipo: can't open input file: /var/folders/Nf/Nf4o5ArDFaWBH2OwtnWM3E+++TQ/-Tmp-//ccJyZxou.out (No such file or directory) make: *** [clsql_uffi.so] Error 1 Is there something I forgot, or some trick to get it to build on Mac OS X? I know very little about C libraries on the Mac these days, so I don't even know where to start on this. Thanks!

    Read the article

  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (Resource Oriented Architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources, in cases where the server designates the resource key. From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:

    Step 1: Get a unique transaction id for creating the new PERSON resource:

    **Client request:** GET /CREATE_PERSON
    **Server response:** 200 OK transaction-id:"as8yfasiob"

    Step 2: Create the new person resource in a request guaranteed to be unique by using the transaction id:

    **Client request** PUT /CREATE_PERSON/{transaction_id} first_name="Big bubba"
    **Server response** 201 Created PersonKey="398u4nsdf"
    (If the request is a duplicate, it would send this same response without creating a new resource. It would perhaps send an error response if the transaction id was used on a non-duplicate request, but I have control over the client, so I can guarantee that this won't happen.)

    The problem that I see with this approach is that it requires sending two requests to the server in order to perform the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be waiting around for the client to complete their request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application. Is there a way to do this?

    Read the article

  • Asp.Net MVC2 Clientside Validation problem with controls with prefixes

    - by alexander
    The problem is: when I put two controls of the same type on a page, I need to specify different prefixes for binding. In this case the validation rules generated right after the form are incorrect. So how do I get client validation to work for this case? The page contains:

    <% Html.RenderPartial(ViewLocations.Shared.PhoneEditPartial, new PhoneViewModel { Phone = person.PhonePhone, Prefix = "PhonePhone" });
       Html.RenderPartial(ViewLocations.Shared.PhoneEditPartial, new PhoneViewModel { Phone = person.FaxPhone, Prefix = "FaxPhone" }); %>

    The control, a ViewUserControl<PhoneViewModel>:

    <%= Html.TextBox(Model.GetPrefixed("CountryCode"), Model.Phone.CountryCode) %>
    <%= Html.ValidationMessage("Phone.CountryCode", new { id = Model.GetPrefixed("CountryCode"), name = Model.GetPrefixed("CountryCode") })%>

    where Model.GetPrefixed("CountryCode") just returns "FaxPhone.CountryCode" or "PhonePhone.CountryCode" depending on the prefix. The validation rules generated after the form are duplicated for the field name "Phone.CountryCode", while the desired result is two rules (required, number) for each of the field names "FaxPhone.CountryCode" and "PhonePhone.CountryCode". The question is somewhat a duplicate of http://stackoverflow.com/questions/2675606/asp-net-mvc2-clientside-validation-and-duplicate-ids-problem but the advice to manually generate ids doesn't help.

    Read the article

  • Time complexity for Search and Insert operations in sorted and unsorted arrays that include duplicates

    - by iecut
    1) For a sorted array I have used binary search. We know that the worst-case complexity for a SEARCH operation in a sorted array is O(lg N) using binary search, where N is the number of items in the array. What is the worst-case complexity for the search operation in an array that includes duplicate values, using binary search? Will it still be O(lg N)? Please correct me if I am wrong! Also, what is the worst case for an INSERT operation in a sorted array using binary search? My guess is O(N) - is that right?

    2) For an unsorted array I have used linear search. Now we have an unsorted array that also accepts duplicate elements/values. What are the worst-case complexities for both the SEARCH and INSERT operations? I think that we can use linear search, which will give us O(N) worst-case time for both operations. Can we do better than this for an unsorted array, and does the complexity change if we accept duplicates in the array?
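
    As a small aside (not from the question), duplicates do not change the O(lg N) bound for searching a sorted array: a lower-bound style binary search still halves the window on every step and, as a bonus, lands on the first occurrence of the key. A sketch:

        // Returns the index of the first occurrence of key in a sorted array
        // (duplicates allowed), or -1 if key is absent. Runs in O(lg N).
        static int LowerBoundSearch(int[] sorted, int key)
        {
            int lo = 0, hi = sorted.Length;          // search window is [lo, hi)
            while (lo < hi)
            {
                int mid = lo + (hi - lo) / 2;
                if (sorted[mid] < key)
                    lo = mid + 1;                    // first occurrence lies to the right
                else
                    hi = mid;                        // mid could be the first occurrence
            }
            return (lo < sorted.Length && sorted[lo] == key) ? lo : -1;
        }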

    Read the article

  • How to exclude R*.class files from a proguard build

    - by Jeremy Bell
    I am one step away from making the method described here: http://stackoverflow.com/questions/2761443/targeting-android-with-scala-2-8-trunk-builds work with a single project (vs one project for scala and one for android). I've come across a problem. Using this input file (arguments to proguard):

    -injars bin;lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)
    -outjar lib/scandroid.jar
    -libraryjars lib/android.jar
    -dontwarn
    -dontoptimize
    -dontobfuscate
    -dontskipnonpubliclibraryclasses
    -dontskipnonpubliclibraryclassmembers
    -keepattributes Exceptions,InnerClasses,Signature,Deprecated, SourceFile,LineNumberTable,*Annotation*,EnclosingMethod
    -keep public class org.scala.jeb.** { public protected *; }
    -keep public class org.xml.sax.EntityResolver { public protected *; }

    Proguard successfully builds scandroid.jar, however it appears to have included the generated R classes that the android resource builder generates and compiles. In this case, they are located in bin/org/jeb/R*.class. This is not what I want. The android dalvik converter cannot build because it thinks there is a duplicate of the R class (it's in scandroid and also the R*.class files). How can I modify the above proguard arguments to exclude the R*.class files from scandroid.jar so the dalvik converter is happy? Edit: I should note that I tried adding ;bin/org/jeb/R.class;etc... to the -libraryjars argument, and that only seemed to cause it to complain about duplicate classes, and in addition proguard decided to exclude my scala class files too.
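
    One thing to try (a sketch only - the filter paths assume the R classes really do live under org/jeb/ as stated above) is ProGuard's class-file filters on the -injars entry, so the generated R classes never make it into scandroid.jar in the first place:

        -injars bin(!org/jeb/R.class,!org/jeb/R$*.class);lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)

    A filter of the form !**/R.class,!**/R$*.class would do the same without hard-coding the package.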

    Read the article

  • How do I manage application configuration in ASP.NET?

    - by GlennS
    I am having difficulty with managing the configuration of an ASP.NET application to deploy for different clients. The sheer volume of different settings which need twiddling takes up large amounts of time, and the current configuration methods are too complicated to let us push this responsibility out to support partners. Any suggestions for better methods to handle this, or good sources of information to research?

    How we do things at present:

    - Various XML configuration files which are referenced in Web.config, for example an AppSettings.xml. Configurations for specific sites are kept in duplicate configuration files.
    - Text files containing lists of data specific to the site
    - In some cases, manual one-off changes to the database
    - C# configuration for the Windsor IoC container

    The specific issues we are having:

    - Different sites with different features enabled, different external services we have to talk to and different business rules.
    - Different deployment types (live, test, training)
    - Configuration keys change across versions (get added, removed), meaning we have to update all the duplicate files
    - We still need to be able to alter keys while the application is running

    Our current thoughts on how we might approach this are:

    - Move the configuration into dynamically compiled code (possibly Boo, Binsor or JavaScript)
    - Have some form of diffing/merging configuration: combine a default config with a live/test/training config and a site-specific config

    Read the article

  • Conceal packet loss in PCM stream

    - by ZeroDefect
    I am looking to use 'Packet Loss Concealment' to conceal lost PCM frames in an audio stream. Unfortunately, I cannot find a library that is accessible without all the licensing restrictions and code bloat (...up for some suggestions though). I have located some GPL code written by Steve Underwood for the Asterisk project which implements PLC. There are several limitations; although, as Steve suggests in his code, his algorithm can be applied to different streams with a bit of work. Currently, the code works with 8kHz 16-bit signed mono streams. Variations of the code can be found through a simple search of Google Code Search. My hope is that I can adapt the code to work with other streams. Initially, the goal is to adjust the algorithm for 8+ kHz, 16-bit signed, multichannel audio (all in a C++ environment). Eventually, I'm looking to make the code available under the GPL license in hopes that it could be of benefit to others... Attached is the code below with my efforts. The code includes a main function that will "drop" a number of frames with a given probability. Unfortunately, the code does not quite work as expected. I'm receiving EXC_BAD_ACCESS when running in gdb, but I don't get a trace from gdb when using 'bt' command. Clearly, I'm trampimg on memory some where but not sure exactly where. When I comment out the *amdf_pitch* function, the code runs without crashing... int main (int argc, char *argv[]) { std::ifstream fin("C:\\cc32kHz.pcm"); if(!fin.is_open()) { std::cout << "Failed to open input file" << std::endl; return 1; } std::ofstream fout_repaired("C:\\cc32kHz_repaired.pcm"); if(!fout_repaired.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } std::ofstream fout_lossy("C:\\cc32kHz_lossy.pcm"); if(!fout_lossy.is_open()) { std::cout << "Failed to open output repaired file" << std::endl; return 1; } audio::PcmConcealer Concealer; Concealer.Init(1, 16, 32000); //Generate random numbers; srand( time(NULL) ); int value = 0; int probability = 5; while(!fin.eof()) { char arr[2]; fin.read(arr, 2); //Generate's random number; value = rand() % 100 + 1; if(value <= probability) { char blank[2] = {0x00, 0x00}; fout_lossy.write(blank, 2); //Fill in data; Concealer.Fill((int16_t *)blank, 1); fout_repaired.write(blank, 2); } else { //Write data to file; fout_repaired.write(arr, 2); fout_lossy.write(arr, 2); Concealer.Receive((int16_t *)arr, 1); } } fin.close(); fout_repaired.close(); fout_lossy.close(); return 0; } PcmConcealer.hpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #ifndef __PCMCONCEALER_HPP__ #define __PCMCONCEALER_HPP__ /** 1. What does it do? The packet loss concealment module provides a suitable synthetic fill-in signal, to minimise the audible effect of lost packets in VoIP applications. It is not tied to any particular codec, and could be used with almost any codec which does not specify its own procedure for packet loss concealment. Where a codec specific concealment procedure exists, the algorithm is usually built around knowledge of the characteristics of the particular codec. It will, therefore, generally give better results for that particular codec than this generic concealer will. 2. How does it work? While good packets are being received, the plc_rx() routine keeps a record of the trailing section of the known speech signal. 
If a packet is missed, plc_fillin() is called to produce a synthetic replacement for the real speech signal. The average mean difference function (AMDF) is applied to the last known good signal, to determine its effective pitch. Based on this, the last pitch period of signal is saved. Essentially, this cycle of speech will be repeated over and over until the real speech resumes. However, several refinements are needed to obtain smooth pleasant sounding results. - The two ends of the stored cycle of speech will not always fit together smoothly. This can cause roughness, or even clicks, at the joins between cycles. To soften this, the 1/4 pitch period of real speech preceeding the cycle to be repeated is blended with the last 1/4 pitch period of the cycle to be repeated, using an overlap-add (OLA) technique (i.e. in total, the last 5/4 pitch periods of real speech are used). - The start of the synthetic speech will not always fit together smoothly with the tail of real speech passed on before the erasure was identified. Ideally, we would like to modify the last 1/4 pitch period of the real speech, to blend it into the synthetic speech. However, it is too late for that. We could have delayed the real speech a little, but that would require more buffer manipulation, and hurt the efficiency of the no-lost-packets case (which we hope is the dominant case). Instead we use a degenerate form of OLA to modify the start of the synthetic data. The last 1/4 pitch period of real speech is time reversed, and OLA is used to blend it with the first 1/4 pitch period of synthetic speech. The result seems quite acceptable. - As we progress into the erasure, the chances of the synthetic signal being anything like correct steadily fall. Therefore, the volume of the synthesized signal is made to decay linearly, such that after 50ms of missing audio it is reduced to silence. - When real speech resumes, an extra 1/4 pitch period of sythetic speech is blended with the start of the real speech. If the erasure is small, this smoothes the transition. If the erasure is long, and the synthetic signal has faded to zero, the blending softens the start up of the real signal, avoiding a kind of "click" or "pop" effect that might occur with a sudden onset. 3. How do I use it? Before audio is processed, call plc_init() to create an instance of the packet loss concealer. For each received audio packet that is acceptable (i.e. not including those being dropped for being too late) call plc_rx() to record the content of the packet. Note this may modify the packet a little after a period of packet loss, to blend real synthetic data smoothly. When a real packet is not available in time, call plc_fillin() to create a sythetic substitute. That's it! */ /*! Minimum allowed pitch (66 Hz) */ #define PLC_PITCH_MIN(SAMPLE_RATE) ((double)(SAMPLE_RATE) / 66.6) /*! Maximum allowed pitch (200 Hz) */ #define PLC_PITCH_MAX(SAMPLE_RATE) ((SAMPLE_RATE) / 200) /*! Maximum pitch OLA window */ //#define PLC_PITCH_OVERLAP_MAX(SAMPLE_RATE) ((PLC_PITCH_MIN(SAMPLE_RATE)) >> 2) /*! The length over which the AMDF function looks for similarity (20 ms) */ #define CORRELATION_SPAN(SAMPLE_RATE) ((20 * (SAMPLE_RATE)) / 1000) /*! History buffer length. The buffer must also be at leat 1.25 times PLC_PITCH_MIN, but that is much smaller than the buffer needs to be for the pitch assessment. */ //#define PLC_HISTORY_LEN(SAMPLE_RATE) ((CORRELATION_SPAN(SAMPLE_RATE)) + (PLC_PITCH_MIN(SAMPLE_RATE))) namespace audio { typedef struct { /*! 
Consecutive erased samples */ int missing_samples; /*! Current offset into pitch period */ int pitch_offset; /*! Pitch estimate */ int pitch; /*! Buffer for a cycle of speech */ float *pitchbuf;//[PLC_PITCH_MIN]; /*! History buffer */ short *history;//[PLC_HISTORY_LEN]; /*! Current pointer into the history buffer */ int buf_ptr; } plc_state_t; class PcmConcealer { public: PcmConcealer(); ~PcmConcealer(); void Init(int channels, int bit_depth, int sample_rate); //Process a block of received audio samples. int Receive(short amp[], int frames); //Fill-in a block of missing audio samples. int Fill(short amp[], int frames); void Destroy(); private: int amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames); void save_history(plc_state_t *s, short *buf, int channel_index, int frames); void normalise_history(plc_state_t *s); /** Holds the states of each of the channels **/ std::vector< plc_state_t * > ChannelStates; int plc_pitch_min; int plc_pitch_max; int plc_pitch_overlap_max; int correlation_span; int plc_history_len; int channel_count; int sample_rate; bool Initialized; }; } #endif PcmConcealer.cpp /* * Code adapted from Steve Underwood of the Asterisk Project. This code inherits * the same licensing restrictions as the Asterisk Project. */ #include "audio/PcmConcealer.hpp" /* We do a straight line fade to zero volume in 50ms when we are filling in for missing data. */ #define ATTENUATION_INCREMENT 0.0025 /* Attenuation per sample */ #if !defined(INT16_MAX) #define INT16_MAX (32767) #define INT16_MIN (-32767-1) #endif #ifdef WIN32 inline double rint(double x) { return floor(x + 0.5); } #endif inline short fsaturate(double damp) { if (damp > 32767.0) return INT16_MAX; if (damp < -32768.0) return INT16_MIN; return (short)rint(damp); } namespace audio { PcmConcealer::PcmConcealer() : Initialized(false) { } PcmConcealer::~PcmConcealer() { Destroy(); } void PcmConcealer::Init(int channels, int bit_depth, int sample_rate) { if(Initialized) return; if(channels <= 0 || bit_depth != 16) return; Initialized = true; channel_count = channels; this->sample_rate = sample_rate; ////////////// double min = PLC_PITCH_MIN(sample_rate); int imin = (int)min; double max = PLC_PITCH_MAX(sample_rate); int imax = (int)max; plc_pitch_min = imin; plc_pitch_max = imax; plc_pitch_overlap_max = (plc_pitch_min >> 2); correlation_span = CORRELATION_SPAN(sample_rate); plc_history_len = correlation_span + plc_pitch_min; ////////////// for(int i = 0; i < channel_count; i ++) { plc_state_t *t = new plc_state_t; memset(t, 0, sizeof(plc_state_t)); t->pitchbuf = new float[plc_pitch_min]; t->history = new short[plc_history_len]; ChannelStates.push_back(t); } } void PcmConcealer::Destroy() { if(!Initialized) return; while(ChannelStates.size()) { plc_state_t *s = ChannelStates.at(0); if(s) { if(s->history) delete s->history; if(s->pitchbuf) delete s->pitchbuf; memset(s, 0, sizeof(plc_state_t)); delete s; } ChannelStates.erase(ChannelStates.begin()); } ChannelStates.clear(); Initialized = false; } //Process a block of received audio samples. 
int PcmConcealer::Receive(short amp[], int frames) { if(!Initialized) return 0; int j = 0; for(int k = 0; k < ChannelStates.size(); k++) { int i; int overlap_len; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples) { /* Although we have a real signal, we need to smooth it to fit well with the synthetic signal we used for the previous block */ /* The start of the real data is overlapped with the next 1/4 cycle of the synthetic data. */ pitch_overlap = s->pitch >> 2; if (pitch_overlap > frames) pitch_overlap = frames; gain = 1.0 - s->missing_samples * ATTENUATION_INCREMENT; if (gain < 0.0) gain = 0.0; new_step = 1.0/pitch_overlap; old_step = new_step*gain; new_weight = new_step; old_weight = (1.0 - new_step)*gain; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->pitchbuf[s->pitch_offset] + new_weight * amp[index]); if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->missing_samples = 0; } save_history(s, amp, j, frames); j++; } return frames; } //Fill-in a block of missing audio samples. int PcmConcealer::Fill(short amp[], int frames) { if(!Initialized) return 0; int j =0; for(int k = 0; k < ChannelStates.size(); k++) { short *tmp = new short[plc_pitch_overlap_max]; int i; int pitch_overlap; float old_step; float new_step; float old_weight; float new_weight; float gain; short *orig_amp; int orig_len; orig_amp = amp; orig_len = frames; plc_state_t *s = ChannelStates.at(k); if (s->missing_samples == 0) { // As the gap in real speech starts we need to assess the last known pitch, //and prepare the synthetic data we will use for fill-in normalise_history(s); s->pitch = amdf_pitch(plc_pitch_min, plc_pitch_max, s->history + plc_history_len - correlation_span - plc_pitch_min, j, correlation_span); // We overlap a 1/4 wavelength pitch_overlap = s->pitch >> 2; // Cook up a single cycle of pitch, using a single of the real signal with 1/4 //cycle OLA'ed to make the ends join up nicely // The first 3/4 of the cycle is a simple copy for (i = 0; i < s->pitch - pitch_overlap; i++) s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]; // The last 1/4 of the cycle is overlapped with the end of the previous cycle new_step = 1.0/pitch_overlap; new_weight = new_step; for ( ; i < s->pitch; i++) { s->pitchbuf[i] = s->history[plc_history_len - s->pitch + i]*(1.0 - new_weight) + s->history[plc_history_len - 2*s->pitch + i]*new_weight; new_weight += new_step; } // We should now be ready to fill in the gap with repeated, decaying cycles // of what is in pitchbuf // We need to OLA the first 1/4 wavelength of the synthetic data, to smooth // it into the previous real data. To avoid the need to introduce a delay // in the stream, reverse the last 1/4 wavelength, and OLA with that. 
gain = 1.0; new_step = 1.0/pitch_overlap; old_step = new_step; new_weight = new_step; old_weight = 1.0 - new_step; for (i = 0; i < pitch_overlap; i++) { int index = (i * channel_count) + j; amp[index] = fsaturate(old_weight * s->history[plc_history_len - 1 - i] + new_weight * s->pitchbuf[i]); new_weight += new_step; old_weight -= old_step; if (old_weight < 0.0) old_weight = 0.0; } s->pitch_offset = i; } else { gain = 1.0 - s->missing_samples*ATTENUATION_INCREMENT; i = 0; } for ( ; gain > 0.0 && i < frames; i++) { int index = (i * channel_count) + j; amp[index] = s->pitchbuf[s->pitch_offset]*gain; gain -= ATTENUATION_INCREMENT; if (++s->pitch_offset >= s->pitch) s->pitch_offset = 0; } for ( ; i < frames; i++) { int index = (i * channel_count) + j; amp[i] = 0; } s->missing_samples += orig_len; save_history(s, amp, j, frames); delete [] tmp; j++; } return frames; } void PcmConcealer::save_history(plc_state_t *s, short *buf, int channel_index, int frames) { if (frames >= plc_history_len) { /* Just keep the last part of the new data, starting at the beginning of the buffer */ //memcpy(s->history, buf + len - plc_history_len, sizeof(short)*plc_history_len); int frames_to_copy = plc_history_len; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + frames - plc_history_len)) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = 0; return; } if (s->buf_ptr + frames > plc_history_len) { /* Wraps around - must break into two sections */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*(plc_history_len - s->buf_ptr)); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = plc_history_len - s->buf_ptr; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } frames -= (plc_history_len - s->buf_ptr); //memcpy(s->history, buf + (plc_history_len - s->buf_ptr), sizeof(short)*len); frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * (i + (plc_history_len - s->buf_ptr))) + channel_index; s->history[i] = buf[index]; } s->buf_ptr = frames; return; } /* Can use just one section */ //memcpy(s->history + s->buf_ptr, buf, sizeof(short)*len); short *hist_ptr = s->history + s->buf_ptr; int frames_to_copy = frames; for(int i = 0; i < frames_to_copy; i ++) { int index = (channel_count * i) + channel_index; hist_ptr[i] = buf[index]; } s->buf_ptr += frames; } void PcmConcealer::normalise_history(plc_state_t *s) { short *tmp = new short[plc_history_len]; if (s->buf_ptr == 0) return; memcpy(tmp, s->history, sizeof(short)*s->buf_ptr); memcpy(s->history, s->history + s->buf_ptr, sizeof(short)*(plc_history_len - s->buf_ptr)); memcpy(s->history + plc_history_len - s->buf_ptr, tmp, sizeof(short)*s->buf_ptr); s->buf_ptr = 0; delete [] tmp; } int PcmConcealer::amdf_pitch(int min_pitch, int max_pitch, short amp[], int channel_index, int frames) { int i; int j; int acc; int min_acc; int pitch; pitch = min_pitch; min_acc = INT_MAX; for (i = max_pitch; i <= min_pitch; i++) { acc = 0; for (j = 0; j < frames; j++) { int index1 = (channel_count * (i+j)) + channel_index; int index2 = (channel_count * j) + channel_index; //std::cout << "Index 1: " << index1 << ", Index 2: " << index2 << std::endl; acc += abs(amp[index1] - amp[index2]); } if (acc < min_acc) { min_acc = acc; pitch = i; } } std::cout << "Pitch: " << pitch << std::endl; return pitch; } } P.S. - I must confess that digital audio is not my forte...

    Read the article

  • How do you think while formulating SQL queries? Is it experience or a concept?

    - by Shantanu Gupta
    I have been working on SQL Server and front-end coding and have usually faced problems formulating queries. I understand most of the SQL concepts needed to formulate queries, but whenever some new piece of functionality comes into the picture that could be done with a SQL query, I usually fail to work it out. I am very comfortable with SELECT queries using joins and such, but when it comes to DML operations I usually struggle. Every query I have never written before feels uncomfortable to create, and whenever I go for an interview I face this problem. Is there some concept behind approaching and formulating SQL queries? E.g. I need to create an SQL query such that a table contains a single column holding duplicate records, and I need to remove the duplicates. I know I can find the solution to this query very easily by Googling, but I want to know how everyone arrives at the desired result. Is it something like "practice makes a man perfect", i.e. once you have done it, next time you will be able to formulate it, or is there some logic or concept behind it? I could have got my answer to the above problem simply by posting it on Stack Overflow and I would have had an answer within 5 to 10 minutes, but I want to know the reasoning. How do you work on any new kind of query? Is it mainly a matter of experience or an application of concepts? Whenever I learn something new in coding I try to use it wherever I can, but here the scenario seems different, because maybe I am lacking some concepts.
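
    For the concrete example in the question - removing duplicates from a table with a single, non-key column - one common SQL Server (2005+) pattern is sketched below. The table and column names are placeholders, not from the question:

        ;WITH numbered AS
        (
            SELECT val,
                   ROW_NUMBER() OVER (PARTITION BY val ORDER BY val) AS rn
            FROM dbo.SingleColumnTable
        )
        DELETE FROM numbered   -- deleting through the CTE removes the underlying duplicate rows
        WHERE rn > 1;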

    Read the article

  • Are AJAX sites crawlable by search engines?

    - by frankadelic
    I had always assumed that AJAX-driven content was invisible to search engines (i.e. content inserted into the DOM via XMLHttpRequest). For example, in this site, the main content is loaded via an AJAX request by the browser: http://www.trustedsource.org/query/terra.cl ...if you view this page with Javascript disabled, the main content area is blank. However, the Google cache shows the full content after the AJAX load: http://74.125.155.132/search?q=cache:JqcT6EVDHBoJ:www.trustedsource.org/query/terra.cl+http://www.trustedsource.org/query/terra.cl&cd=1&hl=en&ct=clnk&gl=us So, apparently search engines do index content loaded by AJAX. Questions:

    - Is this a new feature in search engines? Most postings on the web indicate that you have to publish duplicate static HTML content for search engines to find it.
    - Are there any tricks to get AJAX-driven content crawled by search engines (besides creating duplicate static HTML content)?
    - Will the AJAX-driven content be indexed if it is loaded from a separate subdomain? How about a separate domain?

    Read the article
