Search Results

Search found 373 results on 15 pages for 'joseph mills'.

Page 9/15 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Use php to zip large files

    - by Joseph
    Hi, I have a PHP form with a bunch of checkboxes that each correspond to a file. Once a user selects the checkboxes (files) they want, the script zips up the files and forces a download. I got a simple PHP zip-and-force-download script to work, but when one of the files is huge, or if someone, let's say, selects the whole list to zip up and download, my server errors out. I understand that I can raise the server's limits, but are there any other ways? Thanks!
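
    A minimal sketch of one way to keep memory use down (not the poster's code; file paths and names are placeholders): build the archive on disk with ZipArchive, then stream it to the client instead of holding it in memory.

        <?php
        // Hypothetical, pre-validated list of selected files.
        $files = array('/var/files/report.pdf', '/var/files/data.csv');

        $zipPath = tempnam(sys_get_temp_dir(), 'zip');
        $zip = new ZipArchive();
        $zip->open($zipPath, ZipArchive::CREATE | ZipArchive::OVERWRITE);
        foreach ($files as $file) {
            $zip->addFile($file, basename($file)); // referenced from disk; compressed on close()
        }
        $zip->close();

        header('Content-Type: application/zip');
        header('Content-Disposition: attachment; filename="download.zip"');
        header('Content-Length: ' . filesize($zipPath));
        readfile($zipPath);   // streams the finished archive to the browser
        unlink($zipPath);

    Whether this avoids the original error depends on where the limit is actually being hit (archive creation vs. output buffering); the poster's exact configuration isn't shown.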

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions not to use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read-only to the clients.

    Because clients need to change that one member, they can't map the shared memory region as read-only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed.

    I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting them. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate write-only mapped page, while keeping the structs in read-only mapped pages. This will work, and the OS will force my application to crash if I try to write to the pointed-to data, but indirect storage can be undesirable when writing lock-free algorithms, because needing to follow another level of indirection can change whether something can be done atomically.

    Is there any way to mark smaller areas of memory such that writing to them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86, and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)
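
    For reference, a minimal sketch of the page-granularity protection the question is trying to improve on (POSIX mmap/mprotect; not the poster's code): once a region is remapped read-only, any stray write to it faults immediately, but only with whole-page granularity.

        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstring>
        #include <cstdio>

        int main() {
            const size_t len = sysconf(_SC_PAGESIZE);
            // Writable mapping that the "server" fills in.
            char* p = static_cast<char*>(mmap(nullptr, len, PROT_READ | PROT_WRITE,
                                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
            std::strcpy(p, "immutable after publish");

            // Clients would map (or remap) the region read-only; the protection unit
            // is a whole page, which is exactly the limitation being asked about.
            mprotect(p, len, PROT_READ);

            std::printf("%s\n", p);
            p[0] = 'X';   // faults with SIGSEGV here
            return 0;
        }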

    Read the article

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from the memory. The server has different 'channels' that it can write to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible. I have two approaches for the client:

    1. For each linked list, the client keeps a 'bookmark' pointer to keep its place within the linked list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of a 'next' member of the node. If it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad.

    2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of the list to a master 'update list'. The client keeps only one bookmark, a bookmark into the update list. It endlessly checks whether the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if it is, it moves ahead, reads the ID given, and then processes the new node on the linked list that has that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.

    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20us of latency. The second approach would have small spates of low latencies but often be between 4,000-7,000us! By inserting gettimeofday calls here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking if the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on that I don't understand. What's going on?
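
    A minimal sketch (not the poster's code; the Node layout is assumed) of the kind of publish/poll pair being described, written with std::atomic so the ordering is explicit:

        #include <atomic>

        struct Node {
            int payload;
            std::atomic<Node*> next{nullptr};   // server flips this from null to non-null
        };

        // Server side: fill the node, then publish it with a release store.
        void publish(Node* tail, Node* fresh) {
            fresh->payload = 42;
            tail->next.store(fresh, std::memory_order_release);
        }

        // Client side: the busy-wait from approach #2, with an acquire load so the
        // payload written before the publish is visible once the loop exits.
        Node* wait_for_next(Node* bookmark) {
            Node* nxt;
            while ((nxt = bookmark->next.load(std::memory_order_acquire)) == nullptr) {
                // spinning on a single shared location; every server update to it
                // bounces the cache line between cores
            }
            return nxt;
        }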

    Read the article

  • mod_rewrite directory shortening

    - by Joseph
    I can only find mod_rewrite examples/tutorials for query strings, so can someone help me with this? I would like this
        http://website.tld/Folder1/Folder2/Folder3/Folder4/Folder5/File.exten
    to be transformed into
        http://website.tld/Folder4/File.exten
    Folder4 and Folder5 vary, while Folder1-3 stay the same. File.exten should also be changeable in the rewrite. Thanks.

    Read the article

  • Rails, if instance is in a scope?

    - by Joseph Silvashy
    I'm using Rails 3 and I can't seem to check whether a given instance is in a scope. See here:

        p = Post.find 6
        +----+----------+-------------------------+-------------------------+-------------------------+-----------+
        | id | title    | publish_date            | created_at              | updated_at              | published |
        +----+----------+-------------------------+-------------------------+-------------------------+-----------+
        | 6  | asfdfdsa | 2010-03-28 22:33:00 UTC | 2010-03-28 22:33:46 UTC | 2010-03-28 22:33:46 UTC | true      |
        +----+----------+-------------------------+-------------------------+-------------------------+-----------+

    I have a menu scope which looks like:

        scope :menu, where("published != ?", false).limit(4)

    When I run it I get:

        Post.menu.all
        +----+------------------+------------------+------------------+-------------------+-----------+
        | id | title            | publish_date     | created_at       | updated_at        | published |
        +----+------------------+------------------+------------------+-------------------+-----------+
        | 1  | Lorem ipsum      | 2010-03-23 07... | 2010-03-23 07... | 2010-03-28 21:... | true      |
        | 2  | fdasf            | 2010-03-28 21... | 2010-03-28 21... | 2010-03-28 21:... | true      |
        | 3  | Ruby’s Imple...  | 2010-03-28 21... | 2010-03-28 21... | 2010-03-28 21:... | true      |
        | 4  | dsaD             | 2010-03-28 22... | 2010-03-28 22... | 2010-03-28 22:... | true      |
        +----+------------------+------------------+------------------+-------------------+-----------+

    Which is correct, but if I try to check whether p is in the menu scope using:

        Post.menu.exists?(p)

    I get true when it should be false. What is the proper way to find out if a given instance of something is in a scope?
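
    A short sketch of two common membership checks (not from the question; whether exists? honours the scope's limit(4) is version-dependent, so treat that as an assumption to verify):

        # Pass the id, not the record, so the check runs against the scope's conditions.
        Post.menu.exists?(p.id)

        # Or load the scoped records (this respects limit(4), since the records are
        # fetched first) and check membership in Ruby.
        Post.menu.include?(p)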

    Read the article

  • question regarding rails framework code

    - by Joseph Misiti
    I noticed that the code in the Rails framework uses the following convention all over the place:

        class SomeClass
          class << self
            def some_function
            end
          end
        end

    rather than

        class SomeClass
        end

        def SomeClass.function
        end

    and

        class SomeClass
          def self.somefunction
          end
        end

    What is the reason for this design choice? They all seem to accomplish the same thing.

    Read the article

  • Why does std::cout convert volatile pointers to bool?

    - by Joseph Garvin
    If you try to cout a volatile pointer, even a volatile char pointer where you would normally expect cout to print the string, you will instead simply get '1' (assuming the pointer is not null, I think). I assume the output stream operator<< is template-specialized for volatile pointers, but my question is, why? What use case motivates this behavior? Example code:

        #include <iostream>
        #include <cstring>

        int main()
        {
            char x[500];
            std::strcpy(x, "Hello world");

            int y;
            int *z = &y;

            std::cout << x << std::endl;
            std::cout << (char volatile*)x << std::endl;

            std::cout << z << std::endl;
            std::cout << (int volatile*)z << std::endl;

            return 0;
        }

    Output:

        Hello world
        1
        0x8046b6c
        1
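
    A small aside (not from the question): the usual explanation is that no overload of operator<< accepts a volatile-qualified pointer, so the only remaining built-in conversion is to bool; a sketch of the common workaround is to cast the qualifier away before streaming.

        #include <iostream>

        void print_volatile_string(char volatile* vp) {
            // const_cast may remove the volatile qualifier; this is only safe here
            // because the underlying array was not actually declared volatile.
            std::cout << const_cast<char*>(vp) << std::endl;   // prints the characters
        }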

    Read the article

  • How do you implement Software Transactional Memory?

    - by Joseph Garvin
    In terms of actual low level atomic instructions and memory fences (I assume they're used), how do you implement STM? The part that's mysterious to me is that given some arbitrary chunk of code, you need a way to go back afterward and determine if the values used in each step were valid. How do you do that, and how do you do it efficiently? This would also seem to suggest that just like any other 'locking' solution you want to keep your critical sections as small as possible (to decrease the probability of a conflict), am I right? Also, can STM simply detect "another thread entered this area while the computation was executing, therefore the computation is invalid" or can it actually detect whether clobbered values were used (and thus by luck sometimes two threads may execute the same critical section simultaneously without need for rollback)?
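
    To make the "go back afterward and validate" idea concrete, here is a highly simplified sketch (not a real STM; the per-location version-validation scheme is an assumption used for illustration): each shared cell carries a version counter, a transaction records the version of everything it read, and commit re-checks those versions before making its writes visible.

        #include <atomic>
        #include <vector>

        struct VersionedCell {
            std::atomic<unsigned> version{0};
            std::atomic<int>      value{0};
        };

        struct ReadEntry {
            VersionedCell* cell;
            unsigned       seen_version;   // version observed when the transaction read it
        };

        // Returns true if the read set is still consistent; a real STM would then
        // lock the write set, re-validate, bump versions, and apply the writes.
        bool validate(const std::vector<ReadEntry>& read_set) {
            for (const ReadEntry& e : read_set) {
                if (e.cell->version.load(std::memory_order_acquire) != e.seen_version) {
                    return false;   // another thread committed over our data: roll back and retry
                }
            }
            return true;
        }

    In this particular scheme validation is per-location, so two transactions that touch disjoint cells inside the same critical section can both commit, which speaks to the last part of the question.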

    Read the article

  • addNotify not working right

    - by joseph
    Hello. I call the addNotify() method in the class that I posted here. The problem is that when I call addNotify() as it is in the code, setKeys(objs) does nothing: nothing appears in my explorer of the running app. But when I call addNotify() without the loop (for int....), and add only one item to the ArrayList, it shows that one item correctly. Does anybody know where the problem could be? See the code:

        class ProjectsNode extends Children.Keys {
            private ArrayList objs = new ArrayList();

            public ProjectsNode() {
            }

            @Override
            protected Node[] createNodes(Object o) {
                MainProject obj = (MainProject) o;
                AbstractNode result = new AbstractNode(new DiagramsNode(), Lookups.singleton(obj));
                result.setDisplayName(obj.getName());
                return new Node[] { result };
            }

            @Override
            protected void addNotify() {
                // this loop causes nothing to appear in my explorer.
                // but when I replace this loop by the single line
                // "objs.add(new MainProject("project1000"));", it shows that one item in the explorer
                for (int i = 0; i == 10; i++) {
                    objs.add(new MainProject("project1000"));
                }
                setKeys(objs);
            }
        }
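
    For comparison, a sketch of a conventional counting loop (not the poster's code): the guard i == 10 above is false on the very first pass, so the body never runs and objs stays empty, which would explain the empty explorer.

        // Runs the body ten times; the original guard (i == 10) never lets it run at all.
        for (int i = 0; i < 10; i++) {
            objs.add(new MainProject("project1000"));
        }
        setKeys(objs);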

    Read the article

  • Custom MembershipProvider attempts to pass empty creds after IIS restart

    - by Joseph DeCarlo
    I have a C# custom ASP.Net MembershipProvider. When the user attempts to navigate to another part of the site after IIS is restarted, it doesn't navigate to the login page to collect credentials, but instead attempts to authenticate with empty credentials. Can anyone tell me what I have to do to identify that the new authentication needs to take place and that new creds need to be gathered? I have a complementary custom IHttpModule implementation that allows me to intercept events like BeginRequest and AuthenticateRequest, if that helps.

    Read the article

  • Rails "Load more..." instead of pagination.

    - by Joseph Silvashy
    I have a list of elements, and I've been using will_paginate up until now, but I'd like to have something like "load more..." at the bottom of the list. Is there an easy way to accomplish this using will_paginate or do I need to resort to some other method here? From what I know this is a better route anyhow because then I don't need a SQL count of the records. And it really doesn't matter if there are like 9,847 pages, nobody would need the records beyond the first couple pages anyhow.
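
    A minimal sketch of the offset/limit approach being described (the model and parameter names are assumptions, not from the question):

        # Controller action: fetch the next batch without a COUNT query.
        def index
          batch  = 20
          offset = params[:offset].to_i
          @posts = Post.order("created_at DESC").limit(batch).offset(offset)
          # The "Load more..." link simply requests the same action with
          # offset = offset + batch and appends the results on the client.
        end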

    Read the article

  • how to create internal frame in netbeans platform?

    - by joseph
    I created a class NewProject extends JInternalFrame. Then I created a New...Action named "NEW", located in the File menu. I put the code

        NewProject p = new NewProject();
        p.setVisible(true);

    into the actionPerformed method of the action. But when I run the module and click "NEW" in the File menu, nothing appears. Where could the problem be?
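
    A sketch of the plain-Swing requirement involved (not NetBeans-specific, and not the poster's code): a JInternalFrame only shows up after it has been added to a container such as a JDesktopPane that is itself displayed.

        // ...inside actionPerformed, assuming some visible window exposes a desktop pane:
        JDesktopPane desktop = getDesktopPane();   // hypothetical accessor
        NewProject p = new NewProject();
        desktop.add(p);                            // without this, setVisible(true) shows nothing
        p.setVisible(true);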

    Read the article

  • Save simple data in Magento's DB w/o Model

    - by Joseph Mastey
    I'm looking to save some data in the Magento database without hassling with creating a new EAV object (or even a DB table if I can avoid it). Is there any place that you all know about that Magento will let you store serialized data? If it matters, the data is a serialized set of SKUs that I need to retrieve. I know that I could create a new model, or possibly even create an attribute as a flag on each product, but those are both really overkill for my purposes. Thanks, Joe

    Read the article

  • html source encode

    - by Joseph
    When I view source on my PHP page I get &quot; for a quote, but I would like " to be used in the source code instead. I have no control over manually replacing it, so I'm wondering if there is a function to do such a thing.
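
    A minimal sketch (assuming the string arrives already entity-encoded, which the question doesn't show): PHP's built-in decoders turn the entities back into literal characters before output.

        <?php
        $encoded = 'He said &quot;hello&quot;';

        // htmlspecialchars_decode() reverses htmlspecialchars();
        // html_entity_decode() handles the full entity set.
        echo htmlspecialchars_decode($encoded);   // He said "hello"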

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work, I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops.

    Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: why is the 3rd method fastest?

    Method 1: Round robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':

        void method_one() {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            while(true) {
                for(std::size_t i = 0; i < channel_list.size(); ++i) {
                    Data* current_item = channel_cursors[i];
                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET) {
                        continue;
                    }
                    log_latency(current_item->tv_sec_, current_item->tv_usec_);
                    channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    Method 1 gave very low latency when the number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...

    Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID on to the update list. Now we just need to monitor the single update list, and check the IDs we get from it.

        void method_two() {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true) {
                if(update_cursor->state_ == NOT_FILLED_YET) {
                    continue;
                }

                ::uint32_t update_id = update_cursor->list_id_;
                Data* current_item = channel_cursors[update_id];

                if(current_item->state_ == NOT_FILLED_YET) {
                    std::cerr << "This should never print." << std::endl; // it doesn't
                    continue;
                }

                log_latency(current_item->tv_sec_, current_item->tv_usec_);
                channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
                update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
            }
        }

    Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...

    Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to.

        void method_three() {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true) {
                for(std::size_t i = 0; i < channel_list.size(); ++i) {
                    std::size_t idx = i;

                    ACQUIRE_MEMORY_BARRIER;
                    if(update_cursor->state_ != NOT_FILLED_YET) {
                        //std::cerr << "Found via update" << std::endl;
                        i--;
                        idx = update_cursor->list_id_;
                        update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
                    }

                    Data* current_item = channel_cursors[idx];

                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET) {
                        continue;
                    }

                    found_an_update = true;

                    log_latency(current_item->tv_sec_, current_item->tv_usec_);
                    channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than Method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

    Read the article

  • Transpose a Collection

    - by Joseph Melettukunnel
    Hello, I have a list of the different sizes of a T-shirt, e.g. S, M, L. Since this might change per T-shirt (sometimes we just have e.g. M, L), we load this into a List<Size>. Since most DataGrids (xamDataGrid, WPF Toolkit DataGrid) need properties to bind their columns to, I'd like to somehow transpose my data. Does anyone have an idea how to do this? E.g. instead of having a List<Size> where

        Size { string sizeName, int available, int defect, int ordered }

                Avail.  Defect  Ordered
        [S]     1       2       3
        [M]     1       2       3
        [L]     1       2       3

    I want an object which has the properties S, M, L containing the values like this:

                 [S]  [M]  [L]
        Avail.    1    2    3
        Defect    1    2    3
        Ordered   1    2    3

    The problem here is that I don't know how many sizes will be available for the T-shirt; it might be 3, 4, or 10. Thanks for any help. Cheers

    PS: Here is a mockup of how the final grid should look: http://img39.imageshack.us/img39/9161/multirowspangridfixedel.png
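
    One way to get runtime-defined "properties" is to pivot into a DataTable, whose columns can be created per size at runtime; a sketch follows (the Size property names are assumptions based on the question's fields, and the row labels mirror the mockup):

        using System.Collections.Generic;
        using System.Data;

        // Assumed shape of the question's Size type.
        public class Size
        {
            public string SizeName { get; set; }
            public int Available { get; set; }
            public int Defect { get; set; }
            public int Ordered { get; set; }
        }

        public static class SizePivot
        {
            public static DataTable Transpose(IList<Size> sizes)
            {
                var table = new DataTable();
                table.Columns.Add("Measure", typeof(string));      // Avail. / Defect / Ordered
                foreach (var s in sizes)
                    table.Columns.Add(s.SizeName, typeof(int));    // one column per size

                var avail = table.NewRow();   avail["Measure"] = "Avail.";
                var defect = table.NewRow();  defect["Measure"] = "Defect";
                var ordered = table.NewRow(); ordered["Measure"] = "Ordered";
                foreach (var s in sizes)
                {
                    avail[s.SizeName] = s.Available;
                    defect[s.SizeName] = s.Defect;
                    ordered[s.SizeName] = s.Ordered;
                }
                table.Rows.Add(avail);
                table.Rows.Add(defect);
                table.Rows.Add(ordered);
                return table;              // a grid can bind to table.DefaultView
            }
        }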

    Read the article

  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github:

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master

    I then made a few changes in my git repo, committed, and pushed to github. Now, I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you'll get another working directory with all the necessary SVN metainfo." So I did the following. But none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong?

        $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
        $ cd osqa
        $ git remote add origin git@github.com:turian/osqa.git
        $ git push origin master
        To git@github.com:turian/osqa.git
         ! [rejected]        master -> master (non-fast forward)
        error: failed to push some refs to 'git@github.com:turian/osqa.git'

        $ git pull
        remote: Counting objects: 21, done.
        remote: Compressing objects: 100% (17/17), done.
        remote: Total 17 (delta 7), reused 9 (delta 0)
        Unpacking objects: 100% (17/17), done.
        From git@github.com:turian/osqa
         * [new branch]      master     -> origin/master
        From git@github.com:turian/osqa
         * [new tag]         master     -> master
        You asked me to pull without telling me which branch you want
        to merge with, and 'branch.master.merge' in your configuration file does not
        tell me either. Please name which branch you want to merge on the command line
        and try again (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details on the refspec.
        ...

        $ /usr//lib/git-core/git-svn rebase
        warning: refname 'master' is ambiguous.
        First, rewinding head to replay your work on top of it...
        Applying: Added forum/management/commands/dumpsettings.py
        error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452
        fatal: Cannot lock the ref 'refs/heads/master'.
        Could not move back to refs/heads/master
        rebase refs/remotes/trunk: command returned error: 1

    Read the article

  • PHP get overridden methods from child class

    - by Joseph Mastey
    Given the following case:

        <?php
        class ParentClass {
            public $attrA;
            public $attrB;
            public $attrC;

            public function methodA() {}
            public function methodB() {}
            public function methodC() {}
        }

        class ChildClass {
            public $attrB;

            public function methodA() {}
        }

    How can I get a list of methods (and preferably class vars) that are overridden in ChildClass? Thanks, Joe
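
    A sketch using PHP's reflection API (it assumes ChildClass actually extends ParentClass, which the snippet above omits): a method counts as overridden when the child declares it itself and the parent also declares one with the same name.

        <?php
        function overridden_methods($childClassName) {
            $child  = new ReflectionClass($childClassName);
            $parent = $child->getParentClass();
            if (!$parent) {
                return array();
            }
            $overridden = array();
            foreach ($child->getMethods() as $method) {
                if ($method->getDeclaringClass()->getName() === $child->getName()
                        && $parent->hasMethod($method->getName())) {
                    $overridden[] = $method->getName();
                }
            }
            return $overridden;   // the same idea works for class vars via getProperties()
        }

        print_r(overridden_methods('ChildClass'));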

    Read the article

  • Do condition variables still need a mutex if you're changing the checked value atomically?

    - by Joseph Garvin
    Here is the typical way to use a condition variable:

        // The reader(s)
        lock(some_mutex);
        if(protected_by_mutex_var != desired_value)
            some_condition.wait(some_mutex);
        unlock(some_mutex);

        // The writer
        lock(some_mutex);
        protected_by_mutex_var = desired_value;
        unlock(some_mutex);
        some_condition.notify_all();

    But if protected_by_mutex_var is set atomically by, say, a compare-and-swap instruction, does the mutex serve any purpose (other than that pthreads and other APIs require you to pass in a mutex)? Is it protecting state used to implement the condition? If not, is it safe then to do this?

        // The writer
        protected_by_mutex_var = desired_value;
        some_condition.notify_all();

    With the writer never directly interacting with the reader's mutex? If so, is it even necessary that different readers use the same mutex?
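
    For context, a C++11 sketch (not the poster's code) of the window the mutex closes: if the writer publishes and notifies without ever touching the mutex, the notify can land between the reader's check and its wait, and the reader then sleeps until the next notify (possibly forever).

        #include <atomic>
        #include <condition_variable>
        #include <mutex>

        std::mutex m;
        std::condition_variable cv;
        std::atomic<bool> ready{false};

        void reader() {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return ready.load(); });   // predicate re-checked under the lock
        }

        void writer() {
            ready.store(true);
            {
                // Briefly taking the lock (even with nothing inside) ensures the reader is
                // either before its check (and will now see ready == true) or already waiting.
                std::lock_guard<std::mutex> lk(m);
            }
            cv.notify_all();
        }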

    Read the article

  • test_case files in rails components

    - by Joseph Misiti
    I noticed there are a bunch of test_case.rb files delivered in the Rails components:

        ./actionmailer-2.3.5/lib/action_mailer/test_case.rb
        ./actionpack-2.3.5/lib/action_controller/test_case.rb
        ./actionpack-2.3.5/lib/action_view/test_case.rb
        ./activerecord-2.3.5/lib/active_record/test_case.rb
        ./activesupport-2.3.5/lib/active_support/test_case.rb

    I am wondering how to execute these files. I can't seem to figure out how to do it.
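
    A sketch of how these files are normally consumed (an observation, not a way to run them directly): they define base classes such as ActiveSupport::TestCase that an application's own tests subclass and run through the app's test task.

        # test/unit/post_test.rb in a Rails 2.3 app (hypothetical example)
        require 'test_helper'

        class PostTest < ActiveSupport::TestCase
          test "something about posts" do
            assert true
          end
        end

        # Run with the app's Rake tasks, e.g.:  rake test:units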

    Read the article

  • WPF Border DesiredHeight

    - by Joseph Sturtevant
    The Microsoft example code contains the following:

        <Grid>
            ...
            <Border Name="Content" ... >
                ...
            </Border>
        </Grid>
        <ControlTemplate.Triggers>
            <Trigger Property="IsExpanded" Value="True">
                <Setter TargetName="ContentRow" Property="Height"
                        Value="{Binding ElementName=Content,Path=DesiredHeight}" />
            </Trigger>
            ...
        </ControlTemplate.Triggers>

    When run, however, this code generates the following databinding error:

        System.Windows.Data Error: 39 : BindingExpression path error: 'DesiredHeight' property not found on 'object' ''Border' (Name='Content')'. BindingExpression:Path=DesiredHeight; DataItem='Border' (Name='Content'); target element is 'RowDefinition' (HashCode=2034711); target property is 'Height' (type 'GridLength')

    Despite this error, the code works correctly. I have looked through the documentation and DesiredHeight does not appear to be a member of Border. Can anyone explain where DesiredHeight is coming from? Also, is there any way to resolve/suppress this error so my program output is clean?

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15  | Next Page >