Search Results

Search found 6945 results on 278 pages for 'azure use cases'.


  • Determine the folder of a SAS source file

    - by exhuma
    When I open a SAS file in Enterprise Guide and run it, it is executed on the server. The source file itself is located either on the production site or on the development site; in both cases, however, it is executed on the same server. I want to be able to tell my script to store results in a relative folder. But if I write something like

        libname lib_out xport "..\tmp\foobar.xpt";

    I get an error, because the working folder of the SAS Enterprise Guide process is not the location of my source file, but a folder on the server. The folder ..\tmp does not exist there, and even if it did, the server process does not have write permission in that folder. I would like to determine from which folder the .sas file was loaded and set the working folder accordingly. In one case it's S:\Development\myproject\sas\foobar.sas and in the other case it's S:\Production\myproject\sas\foobar.sas. Is this possible at all? Or how would you do this?
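    A hedged sketch of one possible approach, assuming the Enterprise Guide macro variable &_SASPROGRAMFILE is available (recent EG versions set it to the quoted full path of the open program file - treat its existence as an assumption to verify). Branching on the path picks the matching project root:

        %macro set_lib_out;
            %local progpath root;
            /* Assumption: EG provides &_SASPROGRAMFILE with this program's full path */
            %let progpath = %qsysfunc(dequote(&_SASPROGRAMFILE));
            %if %index(&progpath, \Development\) %then %let root = S:\Development\myproject;
            %else %let root = S:\Production\myproject;
            libname lib_out xport "&root\tmp\foobar.xpt";
        %mend;
        %set_lib_out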


  • Are function-local typedefs visible inside C++0x lambdas?

    - by GMan - Save the Unicorns
    I've run into a strange problem. The following simplified code reproduces the problem in MSVC 2010 Beta 2:

        template <typename T>
        struct dummy
        {
            static T foo(void) { return T(); }
        };

        int main(void)
        {
            typedef dummy<bool> dummy_type;
            auto x = [](void){ bool b = dummy_type::foo(); };
            // auto x = [](void){ bool b = dummy<bool>::foo(); }; // works
        }

    The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases:

        // crashes the compiler, credit to Tarydon
        int main(void)
        {
            struct dummy {};
            auto x = [](void){ dummy d; };
        }

        // works as expected
        int main(void)
        {
            typedef int integer;
            auto x = [](void){ integer i = 0; };
        }

    I don't have g++ 4.5 available to test it right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards bug - though the crash is definitely a bug. For now, I have filed two bug reports. All code snippets above should compile. The error has to do with using scope resolution on locally defined types. (Spotted by dvide.) And the crash bug has to do with... who knows. :)

    Update: According to the bug reports, both have been fixed for the next release of Visual Studio 2010.


  • How to control virtual memory management in Linux?

    - by chmike
    I'm writing a program that uses an mmap'ed file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts over the network. As a consequence, the total data size written into each block is not known in advance. Most of the time it is only 2 MB, but in some cases it can be 20 MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second, and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that RAM pages are no longer dirty once their data is deleted. Should I use mmap and munmap as each block is used and released to control the virtual memory? What would be the overhead of doing this? Also, some colleagues expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.
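    A hedged sketch of one alternative to mmap/munmap cycles: madvise(2) with MADV_DONTNEED lets the process tell the kernel that a range's contents are no longer needed, so its pages can be dropped without tearing down the mapping. (Semantics differ between anonymous and file-backed mappings - for a shared file mapping, later touches re-read from the file - so this is something to benchmark, not a drop-in answer.)

        /* Release one 64 MB block's pages back to the kernel once consumed. */
        #include <sys/mman.h>
        #include <stdio.h>

        #define BLOCK_SIZE (64UL * 1024 * 1024)

        void release_block(void *base, size_t block_index)
        {
            char *block = (char *)base + block_index * BLOCK_SIZE;
            if (madvise(block, BLOCK_SIZE, MADV_DONTNEED) != 0)
                perror("madvise");
        }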


  • Incorrect logic flow? A function that gets coordinates for a Sudoku game

    - by igor
    This function of mine keeps failing an autograder, and I am trying to figure out if there is a problem with its logic flow. Any thoughts? Basically, if the row is wrong, "Invalid row" should be printed, clearInput() called, and false returned. When y is wrong, "Invalid column" is printed, clearInput() called, and false returned. When both are wrong, only "Invalid row" is to be printed (still calling clearInput() and returning false). When both row and y are correct, no error is printed and true is returned. My function gets through most of the test cases but fails towards the end, and I'm a little lost as to why.

        bool getCoords(int & x, int & y)
        {
            char row;
            bool noError = true;
            cin >> row >> y;
            row = toupper(row);
            if (row >= 'A' && row <= 'I' && isalpha(row) && y >= 1 && y <= 9)
            {
                x = row - 'A';
                y = y - 1;
                return true;
            }
            else if (!(row >= 'A' && row <= 'I'))
            {
                cout << "Invalid row" << endl;
                noError = false;
                clearInput();
                return false;
            }
            else
            {
                if (noError)
                {
                    cout << "Invalid column" << endl;
                }
                clearInput();
                return false;
            }
        }


  • MS Word opens documents hosted on a WebDAV share read-only on Windows Vista and 7, but only if no other WebDAV connection is open

    - by rjmunro
    We have a WebDAV server with some Word documents on it. (We are using PHP's HTTP_WebDAV_Server but get the same issue in tests with Apache mod_dav - both use digest authentication; basic auth doesn't work on Vista or later.) We have a web page that opens the Word documents using JavaScript like:

        Doc = new ActiveXObject("Sharepoint.OpenDocuments.3");
        Doc.EditDocument(url, 'Word.Document');

    which causes Word to connect to the WebDAV server and open the document, bypassing IE and most of Windows' built-in WebDAV client. On Windows XP, this works perfectly, and (after prompting you to log in) allows you to edit the Word document and save it back to the server. On Windows 7 and Windows Vista, this usually opens the document read-only, but not in all cases. After quite a bit of trial and error, we found that it worked (i.e. opened read/write) if Explorer happened to be already connected to a WebDAV server. Note that this works with any WebDAV server, not necessarily the one holding the document you are trying to edit. So, other than telling our users to change settings on their machines, is there anything we can do in the JavaScript SharePoint call, or on the WebDAV server, that will fix this issue?

    P.S. We have the same problem when launching Word from an HTA-file version of our system, with JavaScript like:

        wordApp = new ActiveXObject("Word.application");
        wordApp.Visible = true;
        doc = wordApp.Documents.Open(url);

    P.P.S. Sorry if you think this question should be on Server Fault (or even Super User). I couldn't decide, but because we are programming the WebDAV server ourselves (in PHP) and I have more rep on this site than the others, I decided to post it here :-)


  • ASP.NET MVC stand-alone ascx controls: how do I link CSS and JS most efficiently?

    - by Julian
    Hi, I need some advice. I have developed some ASP.NET MVC web pages. Each page has a master and some ascx controls (between 2 and 6), with a JS and a CSS file embedded into it. Up to now everything was fine. In order to improve modularity, flexibility and testability, the ascx's are now expected to be able to work as stand-alone controls. (Each ascx has also got its own CSS and JS files, and in some cases it has another control inside it.) To meet this requirement we call the controller with the relevant parameters and it returns the ascx (partial) directly to the browser, without all of the other parts of the original page. To get it to display correctly (CSS) and behave correctly (JS/jQuery), all of the relevant files need to be added to the user control as links or scripts, e.g. href="<%= ResolveUrl(styleSheet) %>". This contradicts the concept of positioning the files at the most logical place (which could be the master page, for example). How can I overcome this problem? Keep in mind that this is relevant for each "control" ascx file. Any thoughts will be appreciated.
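    One common pattern, sketched here under stated assumptions (the names AssetHelper, RegisterAsset and RenderAssets are illustrative, not an existing API): each partial registers the assets it needs in a per-request set, and the master page renders the collected links once, so duplicates from nested controls collapse automatically.

        using System.Collections.Generic;
        using System.Text;
        using System.Web;
        using System.Web.Mvc;

        public static class AssetHelper
        {
            private const string Key = "__RegisteredAssets";

            // Called from each .ascx for every stylesheet/script it needs.
            public static void RegisterAsset(this HtmlHelper html, string virtualPath)
            {
                var items = html.ViewContext.HttpContext.Items;
                var set = items[Key] as HashSet<string>;
                if (set == null) items[Key] = set = new HashSet<string>();
                set.Add(virtualPath); // nested partials registering the same file collapse here
            }

            // Called once from the master page, after the partials have rendered.
            public static string RenderAssets(this HtmlHelper html)
            {
                var set = html.ViewContext.HttpContext.Items[Key] as HashSet<string>;
                if (set == null) return string.Empty;
                var sb = new StringBuilder();
                foreach (var path in set)
                {
                    var url = VirtualPathUtility.ToAbsolute(path);
                    sb.AppendLine(path.EndsWith(".css")
                        ? "<link rel=\"stylesheet\" type=\"text/css\" href=\"" + url + "\" />"
                        : "<script type=\"text/javascript\" src=\"" + url + "\"></script>");
                }
                return sb.ToString();
            }
        }

    The catch is render order: with the WebForms view engine the master's head renders before the partials, so the render call usually has to sit near the end of the body (or the scheme has to buffer output) - an assumption to verify in your setup.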


  • C++ interpreter conceptual problem

    - by Jan Wilkins
    I've built an interpreter in C++ for a language created by me. One main problem in the design was that I had two different types in the language: number and string. So I have to pass around a struct like:

        class myInterpreterValue
        {
            myInterpreterType type;
            int intValue;
            string strValue;
        };

    Objects of this class are passed around millions of times a second, e.g. during a countdown loop in my language. Profiling pointed out that 85% of the run time is eaten by the allocation function of the string template. This is pretty clear to me: my interpreter has a bad design and doesn't use pointers enough. Yet I don't have an option: I can't use pointers in most cases, as I just have to make copies. What can I do about this? Is a class like this a better idea?

        vector<string> strTable;
        vector<int> intTable;

        class myInterpreterValue
        {
            myInterpreterType type;
            int locationInTable;
        };

    So the class only knows what type it represents and the position in the table. This, however, again has disadvantages: I'd have to add temporary values to the string/int vector tables and then remove them again, which would eat a lot of performance again. Help - how do interpreters of languages like Python or Ruby do this? They somehow need a struct that represents a value in the language, something that can be either an int or a string.
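    A hedged sketch of the usual answer - a tagged value that stores the int inline but only a shared pointer to string data, so copying a value never copies the text (Python and Ruby similarly pass around small tagged references into heap objects, with reference counting or a GC doing the cleanup):

        #include <memory>
        #include <string>

        enum ValueType { IntType, StrType };

        class Value {
            ValueType type_;
            int int_;                                  // valid when type_ == IntType
            std::shared_ptr<const std::string> str_;   // shared: copies are pointer bumps

        public:
            explicit Value(int v) : type_(IntType), int_(v) {}
            explicit Value(const std::string& s)
                : type_(StrType), int_(0),
                  str_(std::make_shared<const std::string>(s)) {}

            ValueType type() const { return type_; }
            int asInt() const { return int_; }
            const std::string& asStr() const { return *str_; }
        };

    std::shared_ptr is C++11; the same idea works with boost::shared_ptr or a hand-rolled refcount, and interning short strings in a table is a further step if profiling still shows allocation churn.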


  • Multi-threading in WCF RIA Services

    - by synergetic
    I use WCF RIA Services to update a customer database. In the domain service:

        public void UpdateCustomer(Customer customer)
        {
            this.ObjectContext.Customers.AttachAsModified(customer);
            syncCustomer(customer);
        }

    After the update, a database trigger fires and, depending on the columns updated, may insert a new record in the CustomerChange table. The syncCustomer(customer) method is executed to check for a new record in the CustomerChange table; if one is found, it creates a text file containing the customer information and forwards that file to an external system for import. Now, this synchronization may take some time, so I wanted to execute it on a different thread:

        private void syncCustomer(Customer customer)
        {
            this.ObjectContext.SaveChanges();
            new Thread(() => syncCustomerInfo(customer.CustomerID)) { IsBackground = true }.Start();
        }

        private void syncCustomerInfo(int customerID)
        {
            //Thread.Sleep(2000);
            //does the real job here
            ...
        }

    The problem is that in most cases syncCustomerInfo cannot find any new CustomerChange record, even when one was definitely there. If I force the thread to sleep first, it finds the new record. I also looked at Entity Framework events, but the only event provided by the object context is SavingChanges, which occurs before changes are saved. Please suggest what else to try.


  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered that using subqueries felt easier and quicker to write and maintain. Commonly, I'll write subqueries that may contain one or more subqueries from related tables. Consider this example:

        SELECT (SELECT username FROM users WHERE records.user_id = user_id) AS username,
               (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
               in_timestamp,
               out_timestamp
        FROM records
        ORDER BY in_timestamp

    More rarely, I'll use subqueries after the WHERE clause. Consider this example:

        SELECT user_id,
               (SELECT name
                FROM organizations
                WHERE (SELECT organization
                       FROM locations
                       WHERE records.location = location_id) = organization_id) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I rewrote the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries versus a JOIN? Is one way more correct or accepted than the other?
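    For reference, a hedged sketch of the first query rewritten with a JOIN (assuming user_id is the key in both tables, as the correlated subqueries imply). An optimizer may produce similar plans for both forms, but the JOIN states the relationship once instead of once per selected column, and it is usually easier to index and to extend:

        SELECT u.username,
               u.last_name || ', ' || u.first_name AS name,
               r.in_timestamp,
               r.out_timestamp
        FROM records r
        JOIN users u ON u.user_id = r.user_id
        ORDER BY r.in_timestamp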


  • Linq to SQL code generator features

    - by Anders Abel
    I'm very fond of Linq to SQL and the programming model it encourages. I think that in many cases, when you are in control of both the database schema and the code, it is not worth the effort to maintain separate relational and object models for the data. Working with Linq to SQL makes it simple to have type-safe data access from .NET, using the partial extension methods to implement business rules.

    Unfortunately, I do not like the dbml designer, due to the lack of a schema-refresh function. So far I have used SqlMetal, but that lacks the customization options of the dbml designer. Because of that, I've started working on a tool which regenerates the whole code file like SqlMetal does, but has the ability to apply the customizations that are available in the dbml designer (and maybe more in the future). The customizations will be described in an XML file which only contains those parts that shouldn't have default values. This should keep the XML file size down, as well as the maintenance burden of it.

    To help me focus on the right features, I would like to know: what would be your favourite feature in a Linq to SQL code generator?


  • How do I programmatically verify, create, and update SQL table structure?

    - by JYelton
    Scenario: I have an application (C#) that expects a SQL database and login, which are set by a user. Once connected, it checks for the existence of several tables and creates them if not found. I'd like to expand on this by having the program be capable of adding columns to those tables if I release a new version of the program which relies upon the new columns.

    Question: What is the best way to programmatically check the structure of an existing SQL table and create or update it to match an expected structure? I am planning to iterate through the list of required columns and alter the existing table whenever it does not contain the new column. I can't help but wonder if there's an approach that is different or better.

    Criteria: Here are some of my expectations and self-imposed rules:

    - Newer versions of the program might no longer use certain columns, but they would be retained for data-logging purposes. In other words, no columns will be removed.
    - Existing data in the table must be preserved, so the table cannot simply be dropped and recreated.
    - In all cases, newly added columns would allow null data, so the population of old records is taken care of by having default null values.

    Example: Here is a sample table (because visual examples help!):

        id   sensor_name   sensor_status   x1     x2     x3     x4
        1    na019         OK              0.01   0.21   1.41   1.22

    Then, in a new version, I may want to add the column x5. The "x-columns" are all data-storage columns that accept null.
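    A hedged sketch of the usual pattern, assuming SQL Server (the table name sensor_log is made up for the example): probe INFORMATION_SCHEMA.COLUMNS for each expected column and issue an ALTER TABLE ... ADD for any that are missing. Run once per required column at startup, alongside the existing table-existence checks:

        IF NOT EXISTS (
            SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_NAME = 'sensor_log' AND COLUMN_NAME = 'x5'
        )
            ALTER TABLE sensor_log ADD x5 FLOAT NULL;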


  • How to get a list of all Subversion commit author usernames?

    - by Quinn Taylor
    I'm looking for an efficient way to get the list of unique commit authors for an SVN repository as a whole, or for a given resource path. I haven't been able to find an SVN command specifically for this (and don't expect one), but I'm hoping there may be a better way than what I've tried so far in Terminal (on OS X):

        svn log --quiet | grep "^r" | awk '{print $3}'
        svn log --quiet --xml | grep author | sed -E "s:</?author>::g"

    Either of these will give me one author name per line, but they both require filtering out a fair amount of extra information. They also don't handle duplicates of the same author name, so for lots of commits by few authors, there's tons of redundancy flowing over the wire. More often than not I just want to see the unique author usernames. (It actually might be handy to infer the commit count for each author on occasion, but even in those cases it would be better if the aggregated data were sent across instead.) I'm generally working with client-only access, so svnadmin commands are less useful, but if necessary, I might be able to ask a special favor of the repository admin if it's strictly necessary or much more efficient. The repositories I'm working with have tens of thousands of commits and many active users, and I don't want to inconvenience anyone.
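    A hedged sketch building on the first command above (standard Unix tools, so it still pulls the full log over the wire - the deduplication just happens locally): sort -u collapses duplicate names, and the uniq -c variant adds per-author commit counts.

        # unique author usernames
        svn log --quiet | grep -E '^r[0-9]' | awk '{print $3}' | sort -u

        # authors with commit counts, busiest first
        svn log --quiet | grep -E '^r[0-9]' | awk '{print $3}' | sort | uniq -c | sort -rn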


  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending headers with TCP window size 0 at times when the network had congestion (at a client's site). We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput. And we would like to be able to act upon it becoming 0 (nothing gets sent, users wait = no good). So, any info, link, or pointer to Indy code is welcome. Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-) Note: it's Indy9/D2007 on Windows Server 2003 SP2.

    More gory details: The TCP zero-window cases happen on the middle tier talking to the DB server, at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not have been caused by it. We want to know when it happens and have a way to do something (logging, at least) in our code. So where should we hook (in Indy?) to know when that condition occurs?


  • In Ruby, why can't a method invocation be treated as a unit when "do" and "end" are used?

    - by Jian Lin
    The following question is related to the question "Ruby Print Inject Do Syntax". My question is: can we insist on using do and end and still make it work with puts or p? This works:

        a = [1,2,3,4]
        b = a.inject do |sum, x|
          sum + x
        end
        puts b  # prints out 10

    So, is it correct to say that inject is a method of the Array class which takes a block of code and then returns a number? If so, it should be no different from calling a function and getting back a return value:

        b = foo(3)
        puts b

    or

        b = circle.getRadius()
        puts b

    In the above two cases, we can directly say:

        puts foo(3)
        puts circle.getRadius()

    So, is there no way to make it work directly in the following way?

        a = [1,2,3,4]
        puts a.inject do |sum, x|
          sum + x
        end

    This gives:

        ch01q2.rb:7:in `inject': no block given (LocalJumpError)
            from ch01q2.rb:4:in `each'
            from ch01q2.rb:4:in `inject'
            from ch01q2.rb:4

    Grouping the method call using ( ) doesn't work either:

        a = [1,2,3,4]
        puts (a.inject do |sum, x|
          sum + x
        end)

    and this gives:

        ch01q3.rb:4: syntax error, unexpected kDO_BLOCK, expecting ')'
        puts (a.inject do |sum, x|
                       ^
        ch01q3.rb:4: syntax error, unexpected '|', expecting '='
        puts (a.inject do |sum, x|
                           ^
        ch01q3.rb:6: syntax error, unexpected kEND, expecting $end
        end)
            ^

    Finally, the following version works:

        a = [1,2,3,4]
        puts a.inject { |sum, x| sum + x }

    But why doesn't the grouping of the method invocation using ( ) work in the earlier example? What if a programmer insists on using do and end - can it be made to work?
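    A hedged note on the precedence at play: do ... end binds to the outermost method call (here puts), while { ... } binds to the innermost (inject). The syntax error above comes from the space before the parenthesis, which makes Ruby parse ( ) as a grouped expression rather than as puts's argument list. Writing the call with no space between the method name and the parenthesis should therefore work, though it's worth verifying on your Ruby version:

        a = [1,2,3,4]
        puts(a.inject do |sum, x|
          sum + x
        end)  # => 10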


  • When are predicates appropriate, and what is the best pattern for usage?

    - by Maxim Gershkovich
    When are predicates appropriate, and what is the best pattern for their usage? What are the advantages of predicates? It seems to me that in most cases where a predicate can be employed, a tight loop would accomplish the same functionality. I don't see a reusability argument, given that you will probably only implement a predicate in one method, right? They look and feel nice, but besides that it seems like you would only employ them when you need a quick hack on the collection classes?

    UPDATE: But why would you be rewriting the tight loop again and again? In my mind/code, when it comes to collections I always end up with something like:

        Class Person
        End Class

        Class PersonList
            Inherits List(Of Person)

            Function FindByName(Name) As Person
                ' tight loop....
            End Function
        End Class

    @Ani: By that same logic I could implement the methods as such:

        Class PersonList
            Inherits List(Of Person)

            Function FindByName(Name) As PersonList
            End Function

            Function FindByAge(Age) As PersonList
            End Function

            Function FindBySocialSecurityNumber(SocialSecurityNumber) As PersonList
            End Function
        End Class

    and call it as such:

        Dim res As PersonList = MyList.FindByName("Max").FindByAge(25).FindBySocialSecurityNumber(1234)

    and the result, along with the amount of code and its reusability, is largely the same, no? I am not arguing, just trying to understand.
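    For contrast, a hedged sketch of the predicate form (VB.NET with .NET 3.5 lambdas; Name and Age are assumed properties on Person, which is empty in the snippet above): the loop lives once inside List(Of T).Find/FindAll, and each call site supplies only the condition - that inversion, not any individual predicate, is where the reuse comes from.

        Dim byName As Person = MyList.Find(Function(p) p.Name = "Max")
        Dim young As List(Of Person) = MyList.FindAll(Function(p) p.Age < 25)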


  • How do I establish a table association in JPA / Hibernate with an existing database?

    - by Paperino
    Currently I have two tables in my database, Encounters and Referrals. There is a one-to-many relationship between these two tables; currently they are linked together with foreign keys. Right now I have:

        public class Encounter extends JPASupport implements java.io.Serializable {
            @Column(name="referralid", unique=false, nullable=true, insertable=true, updatable=true)
            public Integer referralid;
        }

    But what I really want is:

        public class Encounter extends JPASupport implements java.io.Serializable {
            ..........
            @OneToMany(cascade=CascadeType.PERSIST)
            public Set<Referrals> referral;
            ............
        }

    So that I can eventually run a query like this:

        List<Encounter> cases = Encounter.find(
            "select distinct p from Encounter p join p.referrals as t where t.caseid = 103"
        ).fetch();

    How do I tell JPA that, even though I have non-standard column names for my foreign keys and primary keys, it is the object models that I want linked, not simply the integer value of the keys? Does this make sense? I hope so. Thanks in advance!
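    A hedged sketch of the usual mapping, with the caveat that the column names come from the snippet above and the rest is assumption: since the referralid foreign key lives on the Encounters table, that side of the association is @ManyToOne, and @JoinColumn is what tells JPA which non-standard column carries the link; the Set then belongs on the Referrals side via mappedBy.

        public class Encounter extends JPASupport implements java.io.Serializable {
            @ManyToOne(cascade = CascadeType.PERSIST)
            @JoinColumn(name = "referralid")   // FK column on Encounters
            public Referrals referral;
        }

        public class Referrals extends JPASupport implements java.io.Serializable {
            @OneToMany(mappedBy = "referral", cascade = CascadeType.PERSIST)
            public Set<Encounter> encounters;
        }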


  • Does unboxing just return a pointer to the value within the boxed object on the heap?

    - by Charles
    In this MSDN Magazine article, the author states (emphasis mine): "Note that boxing always creates a new object and copies the unboxed value's bits to the object. On the other hand, unboxing simply returns a pointer to the data within a boxed object: no memory copy occurs. However, it is commonly the case that your code will cause the data pointed to by the unboxed reference to be copied anyway." I'm confused by the sentence I've bolded and the sentence that follows it. From everything else I've read, including this MSDN page, I've never before heard that unboxing just returns a pointer to the value on the heap. I was under the impression that unboxing would result in you having a variable containing a copy of the value on the stack, just as you began with. After all, if my variable contains "a pointer to the value on the heap", then I haven't got a value type, I've got a pointer. Can someone explain what this means? Was the author on crack? (There is at least one other glaring error in the article.) And if this is true, what are the cases where "your code will cause the data pointed to by the unboxed reference to be copied anyway"? I just noticed that the article is nearly 10 years old, so maybe this is something that changed very early on in the life of .NET.
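    For concreteness, a hedged sketch of the distinction the article is drawing (C#): at the IL level, the unbox instruction yields a managed pointer into the box, and it is the subsequent load/assignment that copies the bits out - which is the "copied anyway" case, and why C# code in practice always ends up with a stack copy.

        object boxed = 42;       // boxing: new heap object, the int's bits copied into it
        int value = (int)boxed;  // unbox yields a pointer into the box;
                                 // assigning to 'value' is what copies the bits out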


  • Better to use a tuple or a numpy array for storing coordinates?

    - by Ivan
    Hi, I'm porting a C++ scientific application to Python, and as I'm new to Python, some questions come to mind:

    1) I'm defining a class that will contain the coordinates (x, y). These values will be accessed several times, but they will only be read after the class instantiation. Is it better to use a tuple or a numpy array, both memory- and access-time-wise?

    2) In some cases, these coordinates will be used to build a complex number, evaluated with a complex function, and the real part of this function will be used. Assuming that there is no way to separate the real and complex parts of this function, and that the real part will have to be used at the end, maybe it is better to use complex numbers directly to store (x, y)? How bad is the overhead of the conversion from complex to real in Python? The code in C++ does a lot of these conversions, and this is a big slowdown in that code.

    3) Also, some coordinate transformations will have to be performed; for these, the x and y values will be accessed separately, the transformation done, and the result returned. The coordinate transformations are defined in the complex plane, so is it still faster to use the components x and y directly than to rely on complex variables?

    Thank you
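    A hedged sketch of point 2 in plain CPython (timings are machine-dependent, so treat any overhead claim as something to measure with timeit rather than a given): storing the pair as a complex makes the complex-function evaluation direct, and taking the real part afterwards is a plain attribute access, not a conversion pass like in the C++ code.

        import cmath

        x, y = 1.5, -0.25
        z = complex(x, y)       # store the coordinate pair as one complex number
        w = cmath.exp(z * z)    # stand-in for the complex-valued function
        result = w.real         # cheap attribute access; z.real and z.imag
                                # likewise recover x and y for transformations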


  • Difference between float and double

    - by VaioIsBorn
    I know - I've read about the difference between double precision and single precision, etc. But they should give the same results in most cases, right? I was solving a problem in a programming contest, and there were calculations with floating-point numbers that were not really big, so I decided to use float instead of double, and I checked it - I was getting the correct results. But when I sent in the solution, it said only 1 of 10 tests was correct. I checked again and again, until I found that using float is not the same as using double. I put double for the calculations and double for the output, and the program gave the SAME results, but this time it passed all 10 tests correctly. I repeat: the output was the SAME, the results were the SAME, but submitting float didn't work - only double. The values were not so big either, and the program gave the same results on the same tests both with float and double, but the online judge accepted only the double-based solution. Why? What is the difference?
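    A hedged illustration of how quickly the two types diverge even with small values (C++; the exact digits vary by platform, but the gap itself does not): float carries roughly 7 significant decimal digits against double's 15-16, so accumulated rounding error surfaces in exactly the trailing digits an exact-output judge compares.

        #include <cstdio>

        int main() {
            float  f = 0.0f;
            double d = 0.0;
            for (int i = 0; i < 10000; ++i) {
                f += 0.01f;  // 0.01 has no exact binary representation;
                d += 0.01;   // the error compounds far faster in float
            }
            std::printf("float:  %.6f\ndouble: %.6f\n", f, d);
            // prints two visibly different sums, even though both "look right"
            // to a couple of decimal places
        }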


  • Could the DataGridView be this dumb, or is it me?

    - by Selase
    I am trying to bind data to a dropdown list on page load, based on a condition. The code below explains further.

        public partial class AddExhibit : System.Web.UI.Page
        {
            string adminID, caseIDRetrieved;
            DataSet caseDataSet = new DataSet();
            SqlDataAdapter caseSqlDataAdapter = new SqlDataAdapter();
            string strConn = WebConfigurationManager.ConnectionStrings["CMSSQL3ConnectionString1"].ConnectionString;

            protected void Page_Load(object sender, EventArgs e)
            {
                adminID = Request.QueryString["adminID"];
                caseIDRetrieved = Request.QueryString["caseID"];
                if (caseIDRetrieved != null)
                {
                    CaseIDDropDownList.Text = caseIDRetrieved;
                    //CaseIDDropDownList.Enabled = false;
                }
                else
                {
                    try
                    {
                        CreateDataSet();
                        DataView caseDataView = new DataView(caseDataSet.Tables[0]);
                        CaseIDDropDownList.DataSource = caseDataView;
                        CaseIDDropDownList.DataBind();
                    }
                    catch (Exception ex)
                    {
                        string script = "<script>alert('" + ex.Message + "');</script>";
                    }
                }
            }

    The CreateDataSet method called in the if..else statement contains the following code:

        private void CreateDataSet()
        {
            SqlConnection caseConnection = new SqlConnection(strConn);
            caseSqlDataAdapter.SelectCommand = new SqlCommand("Select CaseID FROM Cases", caseConnection);
            caseSqlDataAdapter.Fill(caseDataSet);
        }

    However, when I load the page, and as usual the condition that is supposed to bind the data is met, the gridview decides to display as follows... IS IT ME OR IS IT THE DATAGRID?
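    A hedged guess at the symptom, since the screenshot didn't survive: if every row of the list shows something like "System.Data.DataRowView", it is because no display column was specified, so the control falls back to ToString() on each DataRowView. A minimal sketch of the fix inside the else branch:

        CaseIDDropDownList.DataSource = caseDataView;
        CaseIDDropDownList.DataTextField = "CaseID";   // column to show in the list
        CaseIDDropDownList.DataValueField = "CaseID";  // column to post back
        CaseIDDropDownList.DataBind();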


  • Disable validation in an object in Ruby on Rails

    - by J. Pablo Fernández
    I have an object for which whether validation happens or not should depend on a boolean; in other words, validation is optional. I haven't found a clean way to do it. What I'm currently doing is this (disclaimer: you cannot unsee; leave this page if you are too sensitive):

        def valid?
          if perform_validation
            super
          else
            super        # call valid? so that callbacks get called and things like encrypting
                         # passwords and generating salt in before_validation actually happen
            errors.clear # but then clear the errors
            true         # and claim ourselves to be valid. This is super hacky!
          end
        end

    Any better ways? Before you point to the :if argument of many validations: this is for a user model which is using authlogic, so it has a lot of validation rules. You can stop reading here if you believe me. If you don't: authlogic already sets some :ifs, like :if => :email_changed?, which I would have to turn into:

        :if => Proc.new { |user| user.email_changed? and user.perform_validation }

    and in some other cases, since I'm also using authlogic-oid (OpenID), I just don't have control over the :if - authlogic-oid sets it in a way I cannot change (in time) without further monkey patching. So I would have to override seemingly unrelated methods, catch exceptions if a method doesn't exist, etc. The hacky solution above is the better of my two attempts.
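    One hedged alternative, depending on what triggers the save: ActiveRecord can skip the whole validation phase at save time instead of neutering valid? - though note this also skips the before_validation callbacks the hack above deliberately keeps, so it only fits if the password/salt work can happen elsewhere.

        user.save(false)                # Rails 2.x: persist without running validations
        user.save(:validate => false)   # Rails 3+ equivalent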


  • Silverlight - C# and LINQ - Ordering By Multiple Fields

    - by user70192
    Hello, I have an ObservableCollection of Task objects. Each Task has the following properties: AssignedTo, Category, Title, AssignedDate. I'm giving the user an interface to select which of these fields they want to sort by. In some cases, a user may want to sort by up to three of these properties. How do I dynamically build a LINQ statement that will allow me to sort by the selected fields in either ascending or descending order? Currently, I'm trying the following, but it appears to sort only by the last sort applied:

        var sortedTasks = from task in tasks select task;

        if (userWantsToSortByAssignedTo == true)
        {
            if (sortByAssignedToDescending == true)
                sortedTasks = sortedTasks.OrderByDescending(t => t.AssignedTo);
            else
                sortedTasks = sortedTasks.OrderBy(t => t.AssignedTo);
        }

        if (userWantsToSortByCategory == true)
        {
            if (sortByCategoryDescending == true)
                sortedTasks = sortedTasks.OrderByDescending(t => t.Category);
            else
                sortedTasks = sortedTasks.OrderBy(t => t.Category);
        }

    Is there an elegant way to dynamically append order clauses to a LINQ statement?
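    A hedged sketch of the usual fix: every OrderBy re-sorts the whole sequence, so only the last one sticks; secondary fields have to be appended with ThenBy/ThenByDescending on the IOrderedEnumerable produced by the first sort. One way to structure the cascade (variable names follow the snippet above):

        IOrderedEnumerable<Task> sorted = null;

        if (userWantsToSortByAssignedTo)
            sorted = sortByAssignedToDescending
                ? tasks.OrderByDescending(t => t.AssignedTo)
                : tasks.OrderBy(t => t.AssignedTo);

        if (userWantsToSortByCategory)
            sorted = sorted == null
                ? (sortByCategoryDescending ? tasks.OrderByDescending(t => t.Category)
                                            : tasks.OrderBy(t => t.Category))
                : (sortByCategoryDescending ? sorted.ThenByDescending(t => t.Category)
                                            : sorted.ThenBy(t => t.Category));

        var result = (IEnumerable<Task>)sorted ?? tasks;  // fall back to unsorted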


  • How do I avoid having JSONP responses cached in an HTML5 offline application?

    - by Kent Brewster
    I had good luck with cached offline apps until I tried including data from JSONP endpoints. Here's a tiny example, which loads a single movie from the new Netflix widget API:

        <!DOCTYPE html>
        <html manifest="main.manifest">
        <head>
            <title>Testing!</title>
        </head>
        <body>
            <p>Attempting to recover a title from Netflix now...</p>
            <script type="text/javascript">
                function ping(r) {
                    alert('API reply: ' + r.catalog_title.title.regular);
                }
                var cb = new Date().getTime();
                var s = document.createElement('SCRIPT');
                s.src = 'http://movi.es/7Soq?v=2.0&output=json&expand=widget&callback=ping&cacheBuster=' + cb;
                alert('SCRIPT src: ' + s.src);
                s.type = 'text/javascript';
                document.getElementsByTagName('BODY')[0].appendChild(s);
            </script>
        </body>
        </html>

    ...and here are the contents of my manifest, main.manifest, which contains no files and is only there so my browser knows to cache the calling HTML file:

        CACHE MANIFEST

    Yes, I've confirmed that my server is sending the manifest down with the correct content type, text/cache-manifest. The app works fine - meaning both alerts show - the first time I run it, but subsequent runs, even with the cacheBuster attempt, seem to load the script from cache no matter what the query string is. I see the alert showing the script source, but the callback never fires. If I remove the manifest attribute from the html element and reset my browser - being Safari and the iPhone Simulator - to clear the cache, it works every time. I've also tried alerting the number of SCRIPT tags in the page, and it's definitely seeing both the existing and the dynamically created tag in all cases.
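    A hedged guess grounded in application-cache behavior: once a page is served from the cache, resources that are not listed in the manifest are blocked outright unless the manifest whitelists them, so the dynamically added script may never hit the network at all - the cache buster can't help with that. A NETWORK section with a wildcard is the usual escape hatch; a minimal manifest sketch to try:

        CACHE MANIFEST

        NETWORK:
        *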


  • Is a Hashtable really that fast?

    - by Costa
    Hi,

        s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]

    is the hash function of a Java string; I assume the hash functions of other languages are similar or close to this implementation. Suppose we have a hash table and a list, each holding 50 elements, where each element is a 7-character string: ABCDEF1, ABCDEF2, ABCDEF3, ..., ABCDEFn. Assume each bucket of the hash table contains 5 strings (I think this hash function would put roughly one string per bucket, but let us assume it is 5). If we call col.Contains("ABCDEFn"), a string comparison will do 6 character comparisons and discover the difference on the 7th. The hash table will take around 70 operations (multiplications and additions) to compute the hash code and compare with the 5 strings in the bucket - and BANG, it's found. For the list, it will take around 300 comparisons to find it. In the case where there are only 10 elements, the list will take around 70 operations but the hash table will take around 50 operations - and note that hash-table operations are more time-consuming (multiplications). I conclude that HybridDictionary in .NET is probably the best choice for most cases that require a hashtable of unknown size, because it lets me use a list until the list grows beyond 10 elements. I still need something like a HashSet rather than a Dictionary of keys and values; I wonder why there is no HybridSet! So what do you think? Thanks
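    For reference, a hedged note on the formula above: java.lang.String.hashCode computes exactly this polynomial, but in Horner form - one multiply and one add per character, rather than explicit powers of 31 - which is where the per-character operation-count estimates above come from.

        // Java's String.hashCode, in its usual Horner-scheme form
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i);
        }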


  • Algorithm for assigning a unique series of bits for each user?

    - by Mark
    The problem seems simple at first: just assign an id and represent it in binary. The issue arises because the user is capable of changing any number of 0 bits to 1 bits. To clarify, the hash could go from 0011 to 0111 or 1111, but never to 1010. Each bit has an equal chance of being changed, and changes are independent of each other. What would you have to store in order to go from the hash back to the user, assuming a low percentage of bit tampering by the user? I also assume failure in some cases, so the correct solution should have an acceptable error rate. I would estimate the maximum number of tampered bits at about 30% of the total set. I guess the acceptable error rate would depend on the number of hashes needed and the number of bits being set per hash. I'm worried that with enough manipulation the id cannot be reconstructed from the hash. The question I am asking, I guess, is what safeguards or unique positioning systems I can use to ensure this happens.
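    A hedged sketch of the matching constraint implied above (Python; purely illustrative): since tampering only flips 0 -> 1, every 1-bit of the original id must survive, so a stored id is a candidate for a tampered hash exactly when its set bits are a subset of the hash's. Ranking candidates by how few extra 1-bits they leave unexplained gives a simple decoder with exactly the kind of tunable error rate described - it fails when two ids' bit patterns are too close.

        def candidates(tampered, ids):
            """ids whose 1-bits all survive in the tampered value (0->1 flips only)."""
            return [i for i in ids if (i & ~tampered) == 0]

        def best_guess(tampered, ids):
            cands = candidates(tampered, ids)
            if not cands:
                return None  # too much tampering, or id not in the set
            # fewest unexplained extra bits = least assumed tampering; may still be wrong
            return min(cands, key=lambda i: bin(tampered & ~i).count("1"))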

