Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.

Page 666/1038

  • How do I put a custom version of node/add form in a view using customfield in drupal 6?

    - by niedakh
    Hi, I'm trying to add a pre-filled 'add reply' form to a view of nodes. Reply is a content type with certain fields that need to be pre-filled based on what is in the view. This way a user sees only the selected fields from node/add/reply. At the moment I'm building the forms manually (copying the form from node/add and making some modifications using PHP and Views customfield), but I would like to be able to just push default values to some fields, hide some others, and then have Drupal render it with all the JavaScript glory like date selects etc. Can this be done?

    Read the article

  • What response Adobe Acrobat/Reader gets back after submitting form to a PHP script

    - by Laszlo
    Hi, PDF experts, help needed. I am posting a form from a PDF to a PHP script with Adobe Acrobat. I would like to set up my PDF to display appropriate messages based upon some returned values. So I am looking for returned values: if there are any returned values after posting the form, how can I access them? Maybe there is an option to set this in Adobe? Another thing: when the PDF form gets submitted and, let's say, I echo back a 'thank you' message from my form-handler PHP script, a new PDF gets created and opened displaying my message. Is there a way to open that new message in the same window and close the form instead? Thanks, Laz

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.

    If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

    1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted.

    2. It's not just about the storage requirements; it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react.

    So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database.

    It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool:
        RESTORE DATABASE WidgetProduction_virtual
        FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
        WITH
            MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
            MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
            NORECOVERY, STATS=1, REPLACE
        GO

        RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY
        GO

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, the screenshots in the original post show my 8 GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore.

    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • C#/.NET Little Wonders: ConcurrentBag and BlockingCollection

    - by James Michael Hare
    In the first week of concurrent collections, we began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>. The last post discussed the ConcurrentDictionary<TKey, TValue>. Finally this week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see C#/.NET Little Wonders: A Redux.

    Recap

    As you'll recall from the previous posts, the original collections were object-based containers that accomplished synchronization through a Synchronized member. With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization. With .NET 4.0, a new breed of collections was born in the System.Collections.Concurrent namespace. Of these, the final concurrent collection we will examine is the ConcurrentBag, along with a very useful wrapper class called the BlockingCollection. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this informative whitepaper by the Microsoft Parallel Computing Platform team here.

    ConcurrentBag<T> – Thread-safe unordered collection

    Unlike the other concurrent collections, the ConcurrentBag<T> has no non-concurrent counterpart in the .NET collections libraries. Items can be added and removed from a bag just like any other collection, but unlike the other collections, the items are not maintained in any order. This makes the bag handy for those cases when all you care about is that the data be consumed eventually, without regard for order of consumption or even fairness; that is, it's possible new items could be consumed before older items, given the right circumstances, for a period of time.

    So why would you ever want a container that can be unfair? Well, to look at it another way, you can use a ConcurrentQueue and get the fairness, but it comes at a cost in that the ordering rules and synchronization required to maintain that ordering can affect scalability a bit. Thus sometimes the bag is great when you want the fastest way to get the next item to process, and don't care what item it is or how long it's been waiting.

    The way that the ConcurrentBag works is to take advantage of the new ThreadLocal<T> type (new in System.Threading for .NET 4.0), so that each thread using the bag has a list local to just that thread. This means that adding to or removing from a thread-local list requires very little synchronization. The problem comes in when a thread goes to consume an item but its local list is empty. In this case the bag performs "work-stealing", where it will rob an item from another thread that has items in its list. This requires a higher level of synchronization, which adds a bit of overhead to the take operation.

    So, as you can imagine, this makes the ConcurrentBag good for situations where each thread both produces and consumes items from the bag, but it would be less than ideal in situations where some threads are dedicated producers and the other threads are dedicated consumers, because the work-stealing synchronization would outweigh the thread-local optimization for a thread taking its own items.
    Like the other concurrent collections, there are some curiosities to keep in mind:

    - IsEmpty, Count, ToArray(), and GetEnumerator() lock the collection: each of these needs to take a snapshot of the whole bag to determine the result, thus they tend to be more expensive and cause Add() and Take() operations to block.
    - ToArray() and GetEnumerator() are static snapshots: because each is based on a snapshot, it will not show subsequent updates after the snapshot.
    - Add() is lightweight: since it adds to the thread-local list, there is very little overhead on Add.
    - TryTake() is lightweight if items are in the thread-local list: as long as items are in the thread-local list, TryTake() is very lightweight, much more so than with ConcurrentStack<T> and ConcurrentQueue<T>; however, if the local thread list is empty, it must steal work from another thread, which is more expensive.

    Remember, a bag is not ideal for all situations; it is mainly ideal for situations where a process consumes an item and either decomposes it into more items to be processed, or handles the item partially and places it back to be processed again until some point when it will complete. The main point is that the bag works best when each thread both takes and adds items.

    For example, we could create a totally contrived example where perhaps we want to see the largest power of a number before it crosses a certain threshold. Yes, obviously we could easily do this with a log function, but bear with me while I use this contrived example for simplicity.

    So let's say we have a work function that will take a Tuple out of a bag; this Tuple will contain two ints. The first int is the original number, and the second int is the last power of that number. So we could load our bag with the initial values (let's say we want to know the highest power of each of 2, 3, 5, and 7 under 100):

        var bag = new ConcurrentBag<Tuple<int, int>>
        {
            Tuple.Create(2, 1),
            Tuple.Create(3, 1),
            Tuple.Create(5, 1),
            Tuple.Create(7, 1)
        };

    Then we can create a method that, given the bag, will take out an item and apply the next power:

        public static void FindHighestPowerUnder(ConcurrentBag<Tuple<int,int>> bag, int threshold)
        {
            Tuple<int,int> pair;

            // while there are items to take, this will prefer local first, then steal if no local
            while (bag.TryTake(out pair))
            {
                // look at next power
                var result = Math.Pow(pair.Item1, pair.Item2 + 1);

                if (result < threshold)
                {
                    // if smaller than threshold bump power by 1
                    bag.Add(Tuple.Create(pair.Item1, pair.Item2 + 1));
                }
                else
                {
                    // otherwise, we're done
                    Console.WriteLine("Highest power of {0} under {3} is {0}^{1} = {2}.",
                        pair.Item1, pair.Item2, Math.Pow(pair.Item1, pair.Item2), threshold);
                }
            }
        }

    Now that we have this, we can load up this method as an Action into our Tasks and run it:

        // create array of tasks, start all, wait for all
        var tasks = new[]
        {
            new Task(() => FindHighestPowerUnder(bag, 100)),
            new Task(() => FindHighestPowerUnder(bag, 100)),
        };

        Array.ForEach(tasks, t => t.Start());

        Task.WaitAll(tasks);

    Totally contrived, I know, but keep in mind the main point! When you have a thread or task that operates on an item, and then puts it back for further consumption (or decomposes an item into further sub-items to be processed) you should consider a ConcurrentBag, as the thread-local lists will allow for quick processing.
    However, if you need ordering, or if your processes are dedicated producers or consumers, this collection is not ideal. As with anything, you should performance test, as your mileage will vary depending on your situation!

    BlockingCollection<T> – A producers & consumers pattern collection

    The BlockingCollection<T> can be treated like a collection in its own right, but in reality it adds a producers-and-consumers paradigm to any collection that implements the interface IProducerConsumerCollection<T>. If you don't specify one at the time of construction, it will use a ConcurrentQueue<T> as its underlying store. If you don't want to use the ConcurrentQueue, the ConcurrentStack and ConcurrentBag also implement the interface (though ConcurrentDictionary does not). In addition, you are of course free to create your own implementation of the interface.

    So, for those who don't remember the classical producers-and-consumers computer-science problem, the gist of it is that you have one (or more) processes that are creating items (producers) and one (or more) processes that are consuming these items (consumers). Now, the crux of the problem is that there is a bin (queue) where the produced items are placed, and typically that bin has a limited size. Thus if a producer creates an item but there is no space to store it, it must wait until an item is consumed. Also, if a consumer goes to consume an item and none exists, it must wait until an item is produced.

    The BlockingCollection makes it trivial to implement any standard producers/consumers process set by providing that "bin" where the items can be produced into and consumed from with the appropriate blocking operations. In addition, you can specify whether the bin should have a limited size or can be (theoretically) unbounded, and you can specify timeouts on the blocking operations.

    As far as your choice of "bin", for the most part the ConcurrentQueue is the right choice because it is fairly light and maximizes fairness by ordering items so that they are consumed in the same order they are produced. You can use the concurrent bag or stack, of course, but your ordering would be random-ish in the case of the former and LIFO in the case of the latter.

    So let's look at some of the methods of note in BlockingCollection:

    - BoundedCapacity returns the capacity of the "bin": if the bin is unbounded, the capacity is int.MaxValue.
    - Count returns an internally kept count of items: this makes it O(1), but if you modify the underlying collection directly (not recommended) it is unreliable.
    - CompleteAdding() is used to cut off further adds: this sets IsAddingCompleted and begins to wind down consumers once empty.
    - IsAddingCompleted is true when producers are "done": once you are done producing, you should complete the add process to alert consumers.
    - IsCompleted is true when producers are "done" and the "bin" is empty: once you mark the producers done, and all items are removed, this will be true.
    - Add() is a blocking add to the collection: if the bin is full, it will wait until space frees up.
    - Take() is a blocking remove from the collection: if the bin is empty, it will wait until an item is produced or adding is completed.
    - GetConsumingEnumerable() is used to iterate and consume items: unlike the standard enumerator, this one consumes the items instead of just iterating over them.
    - TryAdd() attempts an add but does not block completely: if adding would block, it returns false instead; you can specify a TimeSpan to wait before stopping.
    - TryTake() attempts to take but does not block completely: like TryAdd(), if taking would block, it returns false instead; you can specify a TimeSpan to wait.

    Note the use of CompleteAdding() to signal the BlockingCollection that nothing else should be added. This means that any attempt to TryAdd() or Add() after being marked completed will throw an InvalidOperationException. In addition, once adding is complete you can still continue to TryTake() and Take() until the bin is empty, after which Take() will throw the InvalidOperationException and TryTake() will return false.

    So let's create a simple program to try this out. Let's say that you have one process that will be producing items, but a slower consumer process that handles them. This gives us a chance to peek inside what happens when the bin is bounded (by default, the bin is NOT bounded):

        var bin = new BlockingCollection<int>(5);

    Now, we create a method to produce items:

        public static void ProduceItems(BlockingCollection<int> bin, int numToProduce)
        {
            for (int i = 0; i < numToProduce; i++)
            {
                // try for 10 ms to add an item
                while (!bin.TryAdd(i, TimeSpan.FromMilliseconds(10)))
                {
                    Console.WriteLine("Bin is full, retrying...");
                }
            }

            // once done producing, call CompleteAdding()
            Console.WriteLine("Adding is completed.");
            bin.CompleteAdding();
        }

    And one to consume them:

        public static void ConsumeItems(BlockingCollection<int> bin)
        {
            // This will only be true if CompleteAdding() was called AND the bin is empty.
            while (!bin.IsCompleted)
            {
                int item;

                if (!bin.TryTake(out item, TimeSpan.FromMilliseconds(10)))
                {
                    Console.WriteLine("Bin is empty, retrying...");
                }
                else
                {
                    Console.WriteLine("Consuming item {0}.", item);
                    Thread.Sleep(TimeSpan.FromMilliseconds(20));
                }
            }
        }

    Then we can fire them off:

        // create one producer and two consumers
        var tasks = new[]
        {
            new Task(() => ProduceItems(bin, 20)),
            new Task(() => ConsumeItems(bin)),
            new Task(() => ConsumeItems(bin)),
        };

        Array.ForEach(tasks, t => t.Start());

        Task.WaitAll(tasks);

    Notice that the producer is faster than the consumer, thus it should be hitting a full bin often and displaying the message after it times out on TryAdd():

        Consuming item 0.
        Consuming item 1.
        Bin is full, retrying...
        Bin is full, retrying...
        Consuming item 3.
        Consuming item 2.
        Bin is full, retrying...
        Consuming item 4.
        Consuming item 5.
        Bin is full, retrying...
        Consuming item 6.
        Consuming item 7.
        Bin is full, retrying...
        Consuming item 8.
        Consuming item 9.
        Bin is full, retrying...
        Consuming item 10.
        Consuming item 11.
        Bin is full, retrying...
        Consuming item 12.
        Consuming item 13.
        Bin is full, retrying...
        Bin is full, retrying...
        Consuming item 14.
        Adding is completed.
        Consuming item 15.
        Consuming item 16.
        Consuming item 17.
        Consuming item 19.
        Consuming item 18.

    Also notice that once CompleteAdding() is called and the bin is empty, the IsCompleted property returns true, and the consumers will exit.
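    For completeness, here is a minimal sketch (mine, not from the original examples) of the same consumer written with GetConsumingEnumerable() instead of a manual TryTake() loop; the foreach consumes items as they arrive and ends on its own once CompleteAdding() has been called and the bin has drained:

        public static void ConsumeItemsWithEnumerable(BlockingCollection<int> bin)
        {
            // Each iteration removes (consumes) an item; the enumeration blocks
            // while the bin is empty and completes once IsCompleted becomes true.
            foreach (var item in bin.GetConsumingEnumerable())
            {
                Console.WriteLine("Consuming item {0}.", item);
                Thread.Sleep(TimeSpan.FromMilliseconds(20));
            }
        }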
    Summary

    The ConcurrentBag is an interesting collection that can be used to optimize concurrency scenarios where tasks or threads both produce and consume items. In this way, it will choose to consume its own work if available, and then steal if not. However, in situations where you want fair consumption or ordering, or in situations where the producers and consumers are distinct processes, the bag is not optimal.

    The BlockingCollection is a great wrapper around any of the concurrent queue, stack, and bag that allows you to add producer and consumer semantics easily, including waiting when the bin is full or empty.

    That's the end of my dive into the concurrent collections. I'd also strongly recommend, once again, that you read this excellent Microsoft white paper that goes into much greater detail on the efficiencies you can gain using these collections judiciously (here). Technorati Tags: C#,.NET,Concurrent Collections,Little Wonders

    Read the article

  • Accessing a file in Windows and Linux when working with Java + Eclipse

    - by Gabriel A. Zorrilla
    Hi there. I'm doing some Java coding at home and at work. At home I have Linux; at work, Windows. The root path to a file X in Windows is c:\Documents And Settings\User\My Documents\Dropbox\file.xxx and in Linux it is something like /media/My Documents/Dropbox/file.xxx. So, every time I edit in either system, I have to manually change the root of the file in a new File(FILEPATH) statement. Is there a workaround for this? I bet making the file path relative to the project resource tree would do the trick, but that's an Eclipse-based solution, not Java, I believe.

    Read the article

  • Using URL Routing for Web Forms with .ashx files

    - by RandomBen
    I am developing a .NET 3.5 Web Forms based website that uses URL routing. So far I have created a few routes and have had no issues. I now have a .ashx file that is going to handle sending .pdf files from a table in SQL Server to the website when someone clicks on a link. Normally when I create a handler it would look like this:

        return BuildManager.CreateInstanceFromVirtualPath("~/ViewItem.aspx", typeof(Page)) as Page;

    For my .ashx file I tried:

        return BuildManager.CreateInstanceFromVirtualPath("~/FileServer.ashx", typeof(Page)) as Page;

    This doesn't work, though, because FileServer.ashx is not a page, so casting it with typeof(Page) ... as Page is going to fail. What do I cast the virtual path result to instead of Page, or is there some other way I should be doing this?
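    One possible direction, sketched below (the class name is illustrative, and this assumes System.Web.Routing from .NET 3.5 SP1): since an .ashx handler implements IHttpHandler rather than deriving from Page, a custom IRouteHandler can create the instance against the interface instead of the Page type.

        using System.Web;
        using System.Web.Compilation;
        using System.Web.Routing;

        public class HttpHandlerRouteHandler : IRouteHandler
        {
            private readonly string _virtualPath;

            public HttpHandlerRouteHandler(string virtualPath)
            {
                _virtualPath = virtualPath;
            }

            public IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                // Create the .ashx instance against IHttpHandler, not Page.
                return BuildManager.CreateInstanceFromVirtualPath(
                    _virtualPath, typeof(IHttpHandler)) as IHttpHandler;
            }
        }

        // Hypothetical registration:
        // routes.Add(new Route("files/{id}", new HttpHandlerRouteHandler("~/FileServer.ashx")));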

    Read the article

  • Django app that can provide user friendly, multiple / mass file upload functionality to other apps

    - by hopla
    Hi, I'm going to be honest: this is a question I asked on the Django-Users mailing list last week. Since I didn't get any replies there yet, I'm reposting it on Stack Overflow in the hope that it gets more attention here.

    I want to create an app that makes it easy to do user-friendly, multiple / mass file upload in your own apps. By user-friendly I mean upload like Gmail, Flickr, ... where the user can select multiple files at once in the browse file dialog. The files are then uploaded sequentially or in parallel, and a nice overview of the selected files is shown on the page with a progress bar next to them. A 'Cancel' upload button is also a possible option.

    All that niceness is usually solved by using a Flash object. Complete solutions are out there for the client side, like: SWFUpload http://swfupload.org/ , FancyUpload http://digitarald.de/project/fancyupload/ , YUI 2 Uploader http://developer.yahoo.com/yui/uploader/ and probably many more. Of course the trick is getting those solutions integrated in your project, especially in a framework like Django, doubly so if you want it to be reusable.

    So, I have a few ideas, but I'm neither an expert on Django nor on Flash-based upload solutions. I'll share my ideas here in the hope of getting some feedback from more knowledgeable and experienced people. (Or even just some 'I want this too!' replies :) ) You will notice that I make a few assumptions: this is to keep the (initial) scope of the application under control. These assumptions are of course debatable. All right, my ideas so far:

    If you want to mass upload multiple files, you are going to have a model to contain each file in, i.e. the model will contain one FileField or one ImageField. Models with multiple (but of course finite) FileFields/ImageFields are not in need of easy mass uploading imho: if you have a model with 100 FileFields you are doing something wrong :)

    Examples where you would want my envisioned kind of mass upload: an app that has just one model 'Brochure' with a file field, a title field (dynamically created from the filename) and a date_added field; or a photo gallery app with models 'Gallery' and 'Photo', where you pick a Gallery to add pictures to, upload the pictures, and new Photo objects are created with foreign keys set to the chosen Gallery.

    It would be nice to be able to configure or extend the app for your favorite Flash upload solution. We can pick one of the three above as a default, but implement the app so that people can easily add additional implementations (kinda like Django can use multiple databases). Let it be agnostic to any particular client-side solution. If we need to pick one to start with, maybe pick the one with the smallest footprint (smallest download of client-side stuff)?

    The Flash-based solutions asynchronously (and either sequentially or in parallel) POST the files to a URL. I suggest that URL be local to our generic app (so it's the same for every app in which you use our app). That URL will go to a view provided by our generic app. The view will do the following: create a new model instance, add the file, OPTIONALLY DO EXTRA STUFF, and save the instance. DO EXTRA STUFF is code that the app that uses our app wants to run. It doesn't have to provide any extra code; if the model has just a FileField/ImageField, the standard view code will do the job. But most apps will want to do extra stuff, I think, like filling in the other fields: title, date_added, foreign keys, many-to-many, ...

    I have not yet thought about a mechanism for DO EXTRA STUFF. Just wrapping the generic app view came to mind, but that is not developer friendly, since you would have to write your own URL pattern and your own view. Then you have to tell the Flash solutions to use a new URL, etc. I think something like signals could be used here.

    Forms/Admin: I'm still very sketchy on how all this could best be integrated into the Admin or generic Django forms/widgets/... (and this is where my lack of Django experience shows):

    In the case of the Gallery/Photo app: you could provide a mass Photo upload widget on the Gallery detail form. But what if the Gallery instance is not saved yet? The file upload view won't be able to set the foreign keys on the Photo instances. I see that the auth app, when you create a user, first asks for username and password and only then provides you with a bigger form to fill in the email address, pick roles, etc. We could do something like that.

    In the case of an app with just one model: how do you provide a form in the Django admin to do your mass upload? You can't do it with the detail form of your model; that's just for one model instance.

    There are probably dozens more questions that need to be answered before I can even start on this app. So please tell me what you think! Give me input! What do you like? What not? What would you do differently? Is this idea solid? Where is it not? Thank you!

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony: “I’d consider the current platform to be a ‘success’; it cost $10K, has lasted for 6 years, was finished, end to end, in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use.

    Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is.

    At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse proxy, which can then delegate requests to the appropriate application. So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress.

    Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles.

    When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: you might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: we had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky.

    However, the problems with the old system do give us a good place to start thinking about desirable qualities for any changes to the platform.
    Specifically:

    - Maintainability — the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere.
    - Replaceability — the tight coupling also meant that replacing one component wouldn’t be straightforward, especially if it wasn’t on a similar Microsoft stack. We’d like to be able to replace different parts without having to modify the existing codebase extensively.
    - Reusability — we’d like to be able to combine the different pieces of the system in different ways for different sites.
    - Repeatable deployments — rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily, whether on production, staging servers, test servers or your own local machine.
    - Testability — if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development.

    In the next part, I’ll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.

    Read the article

  • Detect Installed Application URI Handler on Webkit browsers

    - by Punit Raizada
    Guys, I have a question mainly related to the iPhone web browser, but I am hoping the same solution would work on other browsers that are WebKit based. I have an application (iPhone + Android) that registers a handler for a custom URI (appuri://) on the phone. I am able to launch the application by making a link to "appuri://act/launch" from my web pages. This works only if my application is installed on the device. If the device does not have the app installed, then a message comes up: "Safari was not able to open ....". What I want to do is detect from the browser whether the URI scheme is supported, and then prompt my own message saying "please download the app ..blah blah blah" if the handler for the URI scheme is not found. Is there a way I can detect or find the list of URL scheme handlers on the phone from the web browser?

    Read the article

  • Using XML to generate SAP ABAP and/or SAPScript?

    - by Rob
    Has anyone got examples and/or experience of generating SAP ABAP or SAPScript form code from XML that came from an external application? This would help:

    - creation of SAP-based applications in a data-driven way, by automating the knowledge to do so from the export of XML from an external application
    - automated inputting of knowledge from an external application into SAP applications, rather than manually copying between systems
    - enabling 3rd-party external tools to be used to create data, perhaps in an easier-to-use way than could be done in SAP; or if there was already heavy investment in training with these third-party tools rather than SAP, or if the employment market favoured staff with knowledge of these tools
    - enabling creation of data for multiple purposes and views: those in SAP and outside SAP
    - enabling interoperability of SAP with 3rd-party external tools

    I'm looking for:

    - experiences as to the feasibility
    - tools, e.g. parsers, XSLT etc.
    - examples

    Read the article

  • Codeigniter and Paypal: How it works

    - by Abs
    Hello all, two random questions as I try to integrate PayPal IPN into my CodeIgniter based web app.

    1) Are these two lines the same?

        $data['pp_info'] = $this->input->post();
        $data['pp_info'] = $_POST;

    2) A user agrees to pay a monthly recurring fee to use your service using PayPal. You are aware of the first payment, as you get data returned from PayPal. But how do you keep track of whether the user has paid in the following months? How do you know the user has not cancelled from their PayPal account?

    Thanks all for any help

    Read the article

  • setSearchDisplayController considered private-API?

    - by Adun
    Hi, I recently submitted an app for app review but I got rejected because of the use of a private API. I'm still a bit new to iPhone developing, so I was wondering if someone could help me understand how this part was rejected:

        UISearchBar *searchBar = [[[UISearchBar alloc] initWithFrame:CGRectMake(0, 0, self.tableView.frame.size.width, 0)] autorelease];
        searchBar.showsCancelButton = NO;
        searchBar.placeholder = @"Search Exhibitors";
        [searchBar sizeToFit];
        [self.tableView setTableHeaderView:searchBar];

        UISearchDisplayController *searchDisplayController = [[UISearchDisplayController alloc] initWithSearchBar:searchBar contentsController:self];
        [self performSelector:@selector(setSearchDisplayController:) withObject:searchDisplayController];
        [searchDisplayController setDelegate:self];
        [searchDisplayController setSearchResultsDataSource:self];
        [searchDisplayController setSearchResultsDelegate:self];
        [searchDisplayController release];

    The part that they mentioned was "setSearchDisplayController". I based the searching of a UITableView on the example given here. So can anyone explain why this is considered a private API?

    Read the article

  • How to use Maven2 with Apache Sling?

    - by Jerome Baum
    Google doesn't help, neither does a stackoverflow.com search, and browsing through the Apache Sling documentation doesn't show any solution, so here goes: how can I best use Maven2 with Apache Sling? Basically, as far as I see, the entire application code will need to be placed in the JCR repository, but I need it versioned. There is some documentation about OSGi bundles that could contain resources, but isn't it kind of overkill to create a bundle with a bunch of Java code to talk to Sling when all I want to do is load some files to corresponding URLs? What I was hoping for is something like jetty:run, which automatically deploys a WAR into a new Jetty instance. Ideally the goal would be to run Sling and then load the specified content. Is there some Maven2 plugin for this right now? Or is the bundle-based solution the "canonical" way of doing this?

    Read the article

  • Embedded BI for ASP.NET

    - by Michael Shimmins
    Can anyone recommend a decent business intelligence & reporting application that can be integrated into an OEM application? Primary requirements are:

    - Administrators can define cubes/dimensions etc. (or we, the OEM, can predefine some)
    - Report designers can easily inspect data by visually selecting dimensions, filters etc. in an ad hoc way that is quickly output with little upfront investment
    - Report designers can design a report based on dimensions/filters and save their definition to be run as required
    - Report viewers can view the reports defined by the report designers

    All of this, plus in .NET, so that it can be branded/integrated into our existing web application's look and feel. Permissions need to work off our user/groups system. I've found a few that look good, but they are all in Java, and I don't want to ask our clients to install ASP.NET for the app, and then Java, Tomcat etc. just for reporting. Thanks

    Read the article

  • Having trouble with match wildcards in array

    - by Senthil
    I have a master text file and an exceptions file, and I want to match all the words in the exceptions file against the master file and increase a counter, but the trick is doing it with wildcards. I'm able to do it without wildcards like this:

        words = %w(aaa aab acc ccc AAA)
        stop = %q(/aa./)

        words.each do |x|
          if x.match(/aa./)
            puts "yes for #{x}"
          else
            puts "no for #{x}"
          end
        end

    which prints:

        yes for aaa
        yes for aab
        no for acc
        no for ccc
        yes for AAA

    Also, which would be the best way to go about this: using arrays or some other way?

    Edit: Sorry for the confusion. Yes, stop has multiple wildcards and I want to match all words based on those wildcards:

        words = %w(aaa aab acc ccc AAA)
        stop = %q(aa* ac* ab*)

    Thanks

    Read the article

  • Who is Jeremiah Owyang?

    - by Michael Hylton
    Q: What’s your current role and what career path brought you here?

    J.O.: I'm currently a partner and one of the founding team members at Altimeter Group. I'm currently the Research Director, and I also wear the hat of Industry Analyst. Prior to joining Altimeter, I was an Industry Analyst at Forrester covering Social Computing, and before that, deployed and managed the social media program at Hitachi Data Systems in Santa Clara. Around that time, I started a career blog called Web Strategy which focused on how companies were using the web to connect with customers --and never looked back.

    Q: As an industry analyst, what are you focused on these days?

    J.O.: There are three trends that I'm focusing my research on at this time:

    1) The Dynamic Customer Journey: Individuals (both b2c and b2b) are given so many options in their sources of data, channels to choose from and screens to consume them on that we've found that at each given touchpoint there are 75 potential permutations. Companies that can map this, then deliver information to individuals when they need it, will have a competitive advantage, and we want to find out who's doing this.

    2) One of the sub-themes that supports this trend is Social Performance. Yesterday's social web was disparate engagement of humans, but the next phase will be data driven, and soon new technologies will emerge to help all those that are consuming, publishing, and engaging on the social web to be more efficient with their time through forms of automation. As you might expect, this comes with upsides and downsides.

    3) The Sentient World is our research theme that looks out the furthest, as the world around us (even inanimate objects) becomes 'self aware' and is able to talk back to us via digital devices and beyond. Big data, the internet of things, and mobile devices will all be part of this next set.

    Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work?

    J.O.: The lines between our work and personal lives are dissolving, and this leads to a greater upside of being always connected and having deeper relationships with those that are not. It also means a downside of society's expectations that we're always around and available for colleagues, customers, and beyond. In the future, a balance will be sought as we seek to achieve the goals of family, friends, work, and our own personal desires. All of this is being ironically written at 4:30 am on a Sunday morning.

    Q: How can people keep up with what you’re working on?

    J.O.: A great question, thanks. There are a few sources of information to find out. I'll lead with the first, which is my blog at web-strategist.com. A few times a week I'll publish my industry insights (hires, trends, forces, funding, M&A, business needs), as well as on Twitter, where I'll point to all the news that's fit to print @jowyang. As my research reports go live (we publish them for all to read --called Open Research-- at no cost) they'll emerge on my blog, or check out the research tab to find out more now: http://www.web-strategist.com/blog/research/

    Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon?

    J.O.: Absolutely! This coming Thursday, September 13th, I’m doing a webcast with Oracle on “Managing Social Relationships for the Enterprise”.
    This is going to be a great discussion with Reggie Bradford, Senior Vice President of Product Development at Oracle, and Christian Finn, Senior Director of Product Management for Oracle WebCenter. I’m looking forward to a great discussion around all those issues that so many companies are struggling with these days as they realize how much social media is impacting their business. It’s changing the way your customers and employees interact with your brand. Today it’s no longer a matter of when to become a social-enabled enterprise, but how to become a successful one.

    Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements. Anything you’d like to share to give us a sneak peek of what to expect in Thursday’s webcast?

    J.O.: Below is a 15-minute video which encapsulates Altimeter’s themes on the Dynamic Customer Journey and the Sentient World. I’m really proud to have taken an active role in the first ever LeWeb outside of Paris. This one, held in downtown London across the street from Westminster Abbey, was sold out. If you’ve not heard of LeWeb, this is a global Internet conference hosted by Loic and Geraldine Le Meur, a power couple that stem from Paris but are also living in Silicon Valley; this is one of my favorite conferences to connect with brands, technology innovators, investors and friends. Altimeter was able to play a minor role in suggesting the theme for the event, “Faster Than Real Time”, which stems off previous LeWebs that focused on the “real-time web”. In this radical state, companies are able to anticipate the needs of their customers by using data, technology, and devices, and deliver meaningful experiences before customers even know they need it. I explore two of three of Altimeter’s research themes, the Dynamic Customer Journey and the Sentient World, in my speech, but due to time, did not focus on Adaptive Organization.

    Read the article

  • Show a window from 32-bit NPAPI Plugin in 64-bit Safari

    - by Glenn Howes
    I have an old NPAPI plugin for OS X that I'm trying to refit for use with Snow Leopard's version of Safari. My problem is that when I switch Safari to 64-bit mode, it changes the plugin environment to out-of-process mode (where plugins are hosted by a 32-bit WebKitPluginHost process). And now my toolbar palettes are not visible on screen, even though the NSPanels on which they are based think they are visible. The documentation says that bringing up windows is not recommended, but doesn't say it's prohibited; is there something I can do to bring up my windows?

    Read the article

  • What components of your site do you typically "offload" or embed?

    - by Chad
    Here's what I mean. In developing my ASP.NET MVC based site, I've managed to offload a great deal of the static file hosting and even some of the "work". Like so:

    - jQuery for my JavaScript framework: instead of hosting it on my site, I use the Google CDN
    - Google Maps: obviously "offloaded", no real work being performed on my server, Google hosted
    - jQueryUI framework: Google CDN
    - jQueryUI CSS framework: Google CDN
    - jQueryUI CSS framework themes: Google CDN

    So what I'm asking is this: other than what I've got listed, what aspects of your sites have you been able to offload, or embed, from outside services? A couple of others that come to mind:

    - OpenAuth: take much of the authentication process work off your site
    - Google Wave: when it comes out, take communication work off of your site

    Read the article

  • Covariance and Contravariance on the same type argument

    - by William Edmondson
    The C# spec states that a type parameter cannot be both covariant and contravariant at the same time. This is apparent when creating a covariant or contravariant interface: you decorate your type parameters with "out" or "in" respectively. There is no option that allows both at the same time ("outin"). Is this limitation simply a language-specific constraint, or are there deeper, more fundamental reasons based in category theory that would make you not want your type to be both covariant and contravariant?

    Edit: My understanding was that arrays were actually both covariant and contravariant.

        public class Pet{}
        public class Cat : Pet{}
        public class Siamese : Cat{}

        Cat[] cats = new Cat[10];
        Pet[] pets = new Pet[10];
        Siamese[] siameseCats = new Siamese[10];

        // Cat array is covariant
        pets = cats;

        // Cat array is also contravariant since it accepts conversions from wider types
        cats = siameseCats;
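    For reference, a minimal sketch (with illustrative names) of the type-safety conflict behind the rule: 'out' is only sound when the type parameter appears solely in output positions, and 'in' only when it appears solely in input positions, so any interface that uses it in both directions must stay invariant.

        public interface ITransformer<T>   // neither 'in T' nor 'out T' would compile here
        {
            // T flows out (the return value) and in (the parameter):
            // 'out T' is rejected because of the parameter,
            // 'in T' is rejected because of the return type.
            T Transform(T input);
        }

    If both annotations were allowed at once, an ITransformer<Cat> could be viewed as an ITransformer<Pet> and then be handed any Pet. That is essentially the hole array covariance leaves open at compile time, which is why the pets/cats assignments above succeed but storing an incompatible element through the widened reference fails at run time with an ArrayTypeMismatchException.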

    Read the article

  • Server Security

    - by mahatmanich
    I want to run my own root server (directly accessible from the web, without a hardware firewall) with Debian Lenny, Apache 2, PHP 5, MySQL, the Postfix MTA, SFTP (based on SSH) and maybe a DNS server. What measures/software would you recommend, and why, to lock this server down and minimize the attack vector? Web applications aside. This is what I have so far:

    - iptables (for general packet filtering)
    - fail2ban (brute-force attack defense)
    - ssh (change the default port, disable root access)
    - modsecurity (it is really clumsy and a pain; any alternative here?)
    - sudo: why should I use it? What is the advantage over normal user handling?
    - thinking about GreenSQL for MySQL (www.greensql.net)
    - is Tripwire worth looking at? Snort?

    What am I missing? What is hot and what is not? Best practices? I like "KISS" (keep it simple, secure); I know, it would be nice! Thanks in advance ...

    Read the article

  • Career advice for a frustrated newbie programmer.

    - by Satoru.Logic
    Hi, all. This is my first year as a programmer. The programming language I'm most familiar with is Python. I am maintaining a web2py based legacy system, alone, which means I have to grow up as an all-rounder. The original programmers on this project are nowhere to be found, and there are hardly any usable documents about requirements or anything. I find it very frustrating every time I hear the users criticize the system, and I can't help explaining that "that was not my fault, that mess was there even before I entered this company." I wonder what I should do with this situation. Please give me some advice; thanks in advance.

    Read the article

  • Cross Platform Tips, Tricks & Gotchas

    - by codeelegance
    If you've worked on a cross-platform development project: what did you learn from the experience? What worked? What didn't? What problems did you run into and how did you solve them? Did you aim for consistent appearance and functionality across all platforms, or try to take advantage of each platform's strengths? What language and cross-platform libraries did you use, and did they live up to their claims? Was it a desktop application or web-based?

    Read the article

  • Most popular classroom, bootcamp, or online training for ASP.NET 3.5

    - by Curtis White
    What are the most popular and highest-quality training sources for ASP.NET 3.5? I am interested in both "boot camp" classroom training and online self-paced training, and in both training that can be applied toward certification and non-certification-based training, in the following areas: ASP.NET 3.5, AJAX, and web security. The training should be geared to real-world projects and not memorization. I am most interested to hear from Microsoft MVPs on the matter, and from those who have personally attended or scheduled such training.

    Read the article

  • Qt, unit testing and mock objects.

    - by Eye of Hell
    Hello. The Qt framework has internal support for testing via the QtTest package. Unfortunately, I didn't find any facilities in it that can assist in creating mock objects. Qt signals and slots offer a natural way to create unit-testing-friendly units with inputs (slots) and outputs (signals). But is there any easy way to test that calling a specified slot on an object will result in emitting the correct signals with the correct arguments? Of course I can manually create mock objects and connect them to the objects being tested, but that's a lot of code. Maybe some techniques exist that allow somehow automating mock object creation while unit-testing Qt-based applications?

    Read the article
