Search Results

Search found 58653 results on 2347 pages for 'seed data'.


  • MS Access MSChart.Graph.8 not printing

    - by Tanj
    Software: Microsoft Access 2007 SP2 Database File Version: Access 2000 I have an access program that I inherited from a previous employee. It uses forms for reports and since I don't have much experience in access I have continued to do this. I have created a copy of the program for another project and modified it to suit. I am having trouble getting more then one chart to print. All the charts display in form view, they all have the same properties (excepting data, position, etc.) For some reason they are not printing. They don't even show up in the print preview. I am thinking it must be something with the graphs themselves as they sometimes lose all information. I have to open the graphs in edit mode and change the data source from column to row and back again so that it gets redrawn. (Refresh doesn't fix it) So right now I don't even have a clue as to where to look so ideas are welcome. Edit #1 It seems to be a problem with linking to an unbound form. Subform Field Linker: Can't build a link between unbound forms. The query for the main form is SELECT tTest.ixTest, tMotorTypes.ixMotorType, tMotorTypes.asMotorType, tMotorTypes.fDeprecated, tTestType.asTest, tTest.asSerialNum, tTest.asOrderNum, tTest.asFrameNum, tTest.asRotorNum, tTest.asOperator, tTest.iStation, tTest.dtTestDate, tTest.ixTestType FROM tMotorTypes INNER JOIN (tTestType INNER JOIN tTest ON tTestType.ixTestType=tTest.ixTestType) ON tMotorTypes.ixMotorType=tTest.ixMotorType; The query for the chart is: SELECT qGraphRSTTemperatures.Frequency, qGraphRSTTemperatures.[Drive End], qGraphRSTTemperatures.[Non Drive End], qGraphRSTTemperatures.[Air In], qGraphRSTTemperatures.Core FROM qGraphRSTTemperatures ORDER BY qGraphRSTTemperatures.ixTemperature; Query qGraphRSTTemperatures: SELECT tElectricalData.dblFrequency AS Frequency, tTemperatures.dblDrvEnd AS [Drive End], tTemperatures.dblNonDrvEnd AS [Non Drive End], tTemperatures.dblAirIn AS [Air In], tTemperatures.dblCore AS Core, tSubTest.ixTest, tTemperatures.ixTemperature FROM (tSubTest INNER JOIN tElectricalData ON tSubTest.ixSubTest = tElectricalData.ixSubTest) LEFT JOIN tTemperatures ON tElectricalData.ixElectrical = tTemperatures.ixElectrical WHERE (((tSubTest.ixSubTestType)=5)) ORDER BY tSubTest.ixTest, tTemperatures.ixTemperature; So how come, in the form view it shows the graph with the correct data when linked thus: Child field: ixTest Master field: ixTest but won't print the graph. The graph will print if I remove the links, but then I have all the data from chart query as it is not limited by ixTest. edit #2 It seems to be a data retrieval/rendering issue in printing. Is there anything in printing that changes the context of records with respect to parent/child relationships?

    Read the article

  • Set default owner/user

    - by Daniel Hollands
    I'm a web developer, and so have set up an old machine in the office as an Ubuntu Server, for the purposes of testing websites. I've set up LAMP and have created a /var/www folder, from which all my local sites are served. The issue is one of user permissions, i.e. any files that I copy into that folder (from my Windows machine via the network) automatically take on me (daniel) as their owner. The problem is that I want www-data to become the owner. I did some research and saw that it should be possible to use setuid (and setgid) to automatically set www-data as the owner of all files put into /var/www, but so far I've not had any luck making it work. Can someone help please? Thank you. UPDATE: Would this do what I want it to do? Default file permissions for php user www-data UPDATE 2: I've kinda fixed my issue by changing my samba settings. Using Webmin, I was able to go in and change the default settings (as seen here: http://imageshack.us/photo/my-images/521/captureon.png/)
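    A minimal sketch of the setgid route mentioned above, assuming the web root is /var/www and the stock www-data group is what the sites should run under. Note that setgid on a directory controls the group of new files, not the owner, so a default ACL (if the filesystem has ACLs enabled) covers the permission bits:

        # give the web group ownership of the tree and make new files inherit that group
        sudo chgrp -R www-data /var/www
        sudo chmod -R g+rwX /var/www
        sudo find /var/www -type d -exec chmod g+s {} \;

        # optional: default ACLs so files copied in over the network stay group-writable
        sudo setfacl -R -d -m g:www-data:rwX /var/www

    This is only one way to arrange it; another common route is leaving daniel as the owner and adding him to the www-data group (or adding www-data to daniel's group), so both the copy over the network and the web server can read and write the same files.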

    Read the article

  • PHP/MySQL Database application development tool

    - by RCH
    I am an amateur PHP coder, and have built a couple of dozen projects from scratch (including fairly simple e-commerce systems with user authentication, PayPal integration etc - all coded by hand from a clean page. Have also done a price comparison engine that takes data from multiple sites etc.). But I am no expert with OO and other such advanced techniques - I just have a fairly decent grasp of the basics of data processing, logic, functions and trying to optimize code as much as possible. I just want to make this clear so you have some idea of where I'm coming from. I have a couple of fairly large new projects on my plate for corporate clients - both require bespoke database-driven applications with complex relationships, many tables and lots of different front-end functions to manipulate that data for the internal staff in these companies. I figured building these systems from scratch would probably be a huge waste of time. Instead, there must be tools out there that will allow me to construct MySQL databases and build the pages with things like pagination, action buttons, table construction etc. Some kind of database abstraction layer, or system generator, if you will. What tool do you recommend for such a purpose for someone at my level? Open source would be great, but I don't mind paying for something decent as well. Thanks for any advice.

    Read the article

  • Basic procedural generated content works, but how could I do the same in reverse?

    - by andrew
    My 2D world is made up of blocks. At the moment, I create a block and assign it a number between 1 and 4. The number assigned to the nth block is always the same (i.e. if the player walks backwards or restarts the game) and is generated in the function below. As shown here by this animation, the colours represent the number.

        function generate_data(n)
            math.randomseed(n) -- resets the random so that the 'random' number for n is always the same
            math.random() -- fixes lua random bug
            local no = math.random(4)
            --print(no, n)
            return no
        end

    Now I want to limit the next block's number - a block of 1 will always have a block 2 after it, while block 2 will either have a block 1, 2 or 3 after it, etc. Before, all the blocks' data was randomly generated initially and then saved. This data was then loaded and used instead of being randomly called. While working this way, I could specify what the next block would be easily and it would be saved for consistency. I have now removed this saving/loading in favour of procedural generation as I realised that save files would get very big after travelling. Back to the present. While travelling forward (to the right), it is easy to limit what the next block's number will be. I can generate it at the same time as the other data. The problem is when travelling backwards (to the left): I cannot think of a way to load the previous block so that it is always the same. Does anyone have any ideas on how I could sort this out?
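    One hedged idea, building on the deterministic generate_data above: since block n-1 can always be regenerated from its index, the constraint can be derived from the previous block's value rather than stored. The allowed_next table below is only an illustration of the rules described ("1 is always followed by 2", "2 can be followed by 1, 2 or 3"); entries 3 and 4 are made up:

        -- which block numbers may follow a given block number (illustrative rules)
        local allowed_next = {
            [1] = {2},
            [2] = {1, 2, 3},
            [3] = {2, 3, 4},
            [4] = {3, 4},
        }

        function generate_block(n)
            if n <= 1 then return generate_data(n) end
            local prev = generate_block(n - 1)   -- deterministic, same value every time
            local choices = allowed_next[prev]
            math.randomseed(n)
            math.random()                        -- same Lua random quirk workaround as above
            return choices[math.random(#choices)]
        end

    The recursion makes block n depend on the whole history, which gets slow for large n; caching a window of recently visited blocks, or only letting block n depend directly on generate_data(n - 1), keeps it cheap while staying reproducible in both directions.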

    Read the article

  • help merging perl code routines together for file processing

    - by jdamae
    I need some Perl help in putting these two processes/code to work together. I was able to get them working individually to test, but I need help bringing them together, especially with using the loop constructs. I'm not sure if I should go with foreach... anyway, the code is below. Also, any best practices would be great too as I'm learning this language. Thanks for your help. Here's the process flow I am looking for:
    - read a directory
    - look for a particular file
    - use the file name to strip out some key information to create a newly processed file
    - process the input file
    - create the newly processed file for each input file read (if I read in 10, I create 10 new files)
    Sample Recs:
        col1,col2,col3,col4,col5
        [email protected],[email protected],8,2009-09-24 21:00:46,1
        [email protected],[email protected],16,2007-08-18 22:53:12,33
        [email protected],[email protected],16,2007-08-18 23:41:23,33
    Here's my test code. Target Filetype: `/backups/test/foo101.name.aue-foo_p002.20110124.csv`
    Part 1:
        my $target_dir = "/backups/test/";
        opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";
        while (defined(my $file = readdir($dh))) {
            next if ($file =~ /^\.+$/);
            #Get filename attributes
            if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) {
                print "$1\n";
                print "$2\n";
                print "$3\n";
            }
            print "$file\n";
        }
    Part 2:
        use strict;
        use Digest::MD5 qw(md5_hex);
        #Create new file
        open (NEWFILE, ">/backups/processed/foo$1.name.$2-foo_p$3.out") || die "cannot create file";
        my $data = '';
        my $line1 = <>;
        chomp $line1;
        my @heading = split /,/, $line1;
        my ($sep1, $sep2, $eorec) = ( "^A", "^E", "^D");
        while (<>) {
            my $digest = md5_hex($data);
            chomp;
            my (@values) = split /,/;
            my $extra = "__mykey__$sep1$digest$sep2" ;
            $extra .= "$heading[$_]$sep1$values[$_]$sep2" for (0..scalar(@values));
            $data .= "$extra$eorec";
            print NEWFILE "$data";
        }
        #print $data;
        close (NEWFILE);
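    In case it helps, here is one hedged, untested way the two parts might be combined; paths, separators and the output naming follow the snippets above. Two guesses at the intent are baked in: the digest is taken over the current line rather than the accumulated $data, and the heading loop uses 0..$#values so it does not run one index past the end.

        use strict;
        use warnings;
        use Digest::MD5 qw(md5_hex);

        my $target_dir = "/backups/test/";
        my ($sep1, $sep2, $eorec) = ("^A", "^E", "^D");

        opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";
        while (defined(my $file = readdir($dh))) {
            # only the files matching the target pattern are processed
            next unless $file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+\.csv$/;
            my ($num, $code, $part) = ($1, $2, $3);

            open my $in,  '<', "$target_dir$file" or die "cannot open $file: $!";
            open my $out, '>', "/backups/processed/foo$num.name.$code-foo_p$part.out"
                or die "cannot create output file: $!";

            chomp(my $line1 = <$in>);
            my @heading = split /,/, $line1;

            while (my $line = <$in>) {
                chomp $line;
                my @values = split /,/, $line;
                my $extra  = "__mykey__" . $sep1 . md5_hex($line) . $sep2;
                $extra .= "$heading[$_]$sep1$values[$_]$sep2" for (0 .. $#values);
                print {$out} "$extra$eorec";
            }
            close $out;
            close $in;
        }
        closedir $dh;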

    Read the article

  • SSISDB Analysis Script on Gist

    - by Davide Mauri
    I've created two simple, yet very useful, scripts to extract some useful data to quickly monitor SSIS package execution in SQL Server 2012 and later:
    - get-ssis-execution-status
    - get-ssis-data-pumped-rows
    I've started to use Gist since it comes in very handy for these "quick'n'dirty" scripts and snippets, and you can find the above scripts and others there (hopefully the number will increase over time... I plan to use Gist to store all the code snippets I used to keep in a dedicated folder on my machine). Now, back to the aforementioned scripts. The first one ("get-ssis-execution-status") returns a list of all executed and executing packages along with:
    - the latest successful and running executions (so that one can have an idea of the expected run time)
    - error messages
    - warning messages related to duplicate rows found in lookups
    The second one ("get-ssis-data-pumped-rows") returns information on Data Flow status. Here there's something interesting, IMHO. Nothing exceptional, let it be clear, but nonetheless useful: the script extracts information on destinations and rows sent to destinations right from the messages produced by the Data Flow component. This helps to quickly understand how many rows have been sent and where... without having to increase the logging level. Enjoy! PS: I haven't tested them with SQL Server 2014, but AFAIK they should work without problems. Of course any feedback on this is welcome.
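    The scripts themselves live on Gist; purely as a rough idea of the kind of query involved, here is my own sketch against the standard SSISDB catalog views (this is not the author's actual script, and column choices are mine):

        -- latest executions per package with status and duration
        SELECT TOP (50)
            e.folder_name,
            e.project_name,
            e.package_name,
            e.status,                 -- 1=created, 2=running, 4=failed, 7=succeeded, ...
            e.start_time,
            e.end_time,
            DATEDIFF(SECOND, e.start_time, e.end_time) AS duration_sec
        FROM catalog.executions AS e
        ORDER BY e.execution_id DESC;

        -- recent error messages logged by the catalog
        SELECT operation_id, message_time, message
        FROM catalog.operation_messages
        WHERE message_type = 120      -- 120 = error, 110 = warning
        ORDER BY message_time DESC;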

    Read the article

  • Making simple forms in web applications

    - by levalex
    How do you work with forms in your web applications? I am not talking about RESTful applications; I don't want to build a heavy front end using frameworks like Backbone. For example, I need to add a "contact us" form. I need to check the data filled in by the user and tell him that his data was sent. Requirements: I want to use AJAX. I want to validate the form on the back-end side and don't want to duplicate the same code on the front-end side. I have my own solution, but it doesn't satisfy me. I make an AJAX request with serialized data on form submit and get a response. The next step is checking the "Content-Type" header:
    - html - errors exist in the filled form and the response html is the form with error labels - I will replace my form with the response html.
    - json and response.error_code == 0 - the form was successfully submitted - I will show the user a success notification.
    - json and response.error_code != 0 - something was broken on the back end (like the connection to the database).
    - other - I display the following message: We have been notified and have started to work on that problem. Please try again later.
    The problem with that approach is that I can't use it with forms that upload files. What is your practice? What libraries and principles do you use?

    Read the article

  • Team Foundation Service Preview now open for all!

    - by Tarun Arora
    The concept of TFS in the cloud was first presented back in early 2010, the product team worked hard to preview a constantly evolving solution at the BUILD conference last year and after having completed 31 Sprints today the preview service has been opened for all. No more invitation codes required, TfsPreview has been made public! “Since we announced the Team Foundation Service Preview at the BUILD conference last year, we’ve limited the on boarding of new customers by requiring invitation codes to create accounts.  The main reason for this has been to control the growth of the service to make sure it didn’t run away from us and end up with a bad user experience.  In this time period, we’ve continued to work on our infrastructure, performance, scale, monitoring, management and, of course, some cool new features like cloud build. ”   - Brian Harry Since the service is still in preview, it is free for all… If you haven’t, now is the best time to try out the offering. There is no fixed time line on how long before service becomes chargeable but the terms of service support production use, the service is reliable and the product team committed to carry all of your data forward into production. “The service will remain in “preview” for a while longer while we work through additional features like data portability, commercial terms, etc but the terms of service support production use, the service is reliable and we expect to carry all of your data forward into production. ”  - Brian Harry As of today it’s possible to use TFS Preview with VS 2012 RC, VS 2010 SP1, VS 2008 SP1, the service currently does not work with VS 2005, this is something the product team is actively working on. You can refer to Brian’s announcement blog post here, http://blogs.msdn.com/b/bharry/archive/2012/06/11/team-foundation-service-preview-is-public.aspx

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx
    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:
    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.
    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen using one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce the non-negative balance rule?" - then boils down to:
    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.
    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly on too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil. Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transaction" issue. Now let's turn our attention to consistency. What you should know about mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the ledger entry having been written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, mongo offers several write-concerns. Write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns, varying from flushing the write to the disk of the writable instance, flushing to disk on several members of the replica set, a majority of the replica set or all of the members of a replica set. The former choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is not different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write concerns, mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance, and doesn't make use of the other read-only servers. But this choice can be made on a query by query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to.
So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that your placing of an order does not mean that your product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets a transaction for customers newer than, say, 15 minutes and whose validation flag is not set. This buys us time two ways: replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted/reverted) would be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
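    For concreteness, a rough mongo-shell sketch of the ledger idea described above; the collection and field names are my own, and the balance check is deliberately naive (no closing-the-book optimization, no per-account validator):

        db.ledger.insert({
            ts: new Date(),
            from: "A",
            to: "B",
            amount: 100,
            validated: false
        });

        // validation: sum everything touching A's account and check it stays non-negative
        var balance = 0;
        db.ledger.find({ $or: [ { from: "A" }, { to: "A" } ] }).forEach(function (e) {
            balance += (e.to === "A" ? e.amount : -e.amount);
        });

        if (balance < 0) {
            // compensating entry, timestamped at (or just after) the original
            db.ledger.insert({ ts: new Date(), from: "B", to: "A", amount: 100, validated: true });
        }

    Each insert is a single-document write, so it is atomic in exactly the sense the post describes; the "transaction" is the pair of entries plus the validation pass.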

    Read the article

  • ODI 11g - Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then think: why don't they know this? Such as this article here – in the past customers (see forum) were told to use a staging route which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that:
    - improves the out of the box experience – just build the mapping and the appropriate KM is used
    - improves out of the box performance for file to file data movement.
    This improvement to the out of the box handling of File to File data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up the file integration handling. In the past I had seen some consultants write Perl versions of the file to file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, by default the IKM File to File (Java) knowledge module was assigned. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (can also filter if desired) and moved about 1.3Gb with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box from the design to the execution without any funky configuration, plus, and it's a big plus, it was much faster than before. So if you are doing any file to file transformations, check it out!

    Read the article

  • How do I convince my boss that it's OK to use an application to access an outside website?

    - by Cyberherbalist
    That is, if you agree that it's OK. We have a need to maintain an accurate internal record of bank routing numbers, and my boss wants me to set up a process where once a week someone goes to the Federal Reserve's website, clicks on the link to get the list of routing numbers (or the link giving the updates since a particular date), and then manually uploads the resultant text file to an application that will make the update to our data. I told him that a manual process was not at all necessary, and that I could write a routine that would access the FED's routing numbers in the application that keeps our data updated, and put it on whatever schedule was appropriate. But he is greatly opposed to doing this, and calls it "hacking the Federal Reserve website." I think he's afraid that the FED is going to get after us. I showed him the FED's robots.txt file, and the only thing it forbids is automated indexing of pages with extension .cf*:
        User-agent: *  # applies to all robots
        Disallow: CF   # disallow indexing of all CF* directories and pages
    This says nothing about accessing the same data automatically that you could access manually. Anyone have a good counterargument to the idea that we'd be "hacking" the FED?

    Read the article

  • Fluent NHibernate not working outside of NUnit test fixtures

    - by thorkia
    Okay, here is my problem... I created a Data Layer using the RTM Fluent Nhibernate. My create session code looks like this: _session = Fluently.Configure(). Database(SQLiteConfiguration.Standard.UsingFile("Data.s3db")) .Mappings( m => { m.FluentMappings.AddFromAssemblyOf<ProductMap>(); m.FluentMappings.AddFromAssemblyOf<ProductLogMap>(); }) .ExposeConfiguration(BuildSchema) .BuildSessionFactory(); When I reference the module in a test project, then create a test fixture that looks something like this: [Test] public void CanAddProduct() { var product = new Product {Code = "9", Name = "Test 9"}; IProductRepository repository = new ProductRepository(); repository.AddProduct(product); using (ISession session = OrmHelper.OpenSession()) { var fromDb = session.Get<Product>(product.Id); Assert.IsNotNull(fromDb); Assert.AreNotSame(fromDb, product); Assert.AreEqual(fromDb.Id, product.Id); } My tests pass. When I open up the created SQLite DB, the new Product with Code 9 is in it. the tables for Product and ProductLog are there. Now, when I create a new console application, and reference the same library, do something like this: Product product = new Product() {Code = "10", Name = "Hello"}; IProductRepository repository = new ProductRepository(); repository.AddProduct(product); Console.WriteLine(product.Id); Console.ReadLine(); It doesn't work. I actually get pretty nasty exception chain. To save you lots of head aches, here is the summary: Top Level exception: An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail.\r\n\r\n The PotentialReasons collection is empty The Inner exception: The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use element in the application configuration file to specify the full name of the assembly. Both the unit test library and the console application reference the exact same version of System.Data.SQLite. Both projects have the exact same DLLs in the debug folder. I even tried copying SQLite DB the unit test library created into the debug directory of the console app, and removed the build schema lines and it still fails If anyone can help me figure out why this won't work outside of my unit tests it would be greatly appreciated. This crazy bug has me at a stand still.

    Read the article

  • C++ Iterator lifetime and detecting invalidation

    - by DK.
    Based on what's considered idiomatic in C++11: should an iterator into a custom container survive the container itself being destroyed? should it be possible to detect when an iterator becomes invalidated? are the above conditional on "debug builds" in practice? Details: I've recently been brushing up on my C++ and learning my way around C++11. As part of that, I've been writing an idiomatic wrapper around the uriparser library. Part of this is wrapping the linked list representation of parsed path components. I'm looking for advice on what's idiomatic for containers. One thing that worries me, coming most recently from garbage-collected languages, is ensuring that random objects don't just go disappearing on users if they make a mistake regarding lifetimes. To account for this, both the PathList container and its iterators keep a shared_ptr to the actual internal state object. This ensures that as long as anything pointing into that data exists, so does the data. However, looking at the STL (and lots of searching), it doesn't look like C++ containers guarantee this. I have this horrible suspicion that the expectation is to just let containers be destroyed, invalidating any iterators along with it. std::vector certainly seems to let iterators get invalidated and still (incorrectly) function. What I want to know is: what is expected from "good"/idiomatic C++11 code? Given the shiny new smart pointers, it seems kind of strange that STL allows you to easily blow your legs off by accidentally leaking an iterator. Is using shared_ptr to the backing data an unnecessary inefficiency, a good idea for debugging or something expected that STL just doesn't do? (I'm hoping that grounding this to "idiomatic C++11" avoids charges of subjectivity...)
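    For reference, a minimal sketch of the shared-state arrangement the question describes; all names here are invented for illustration, and this is only one of the possible trade-offs rather than what the STL does:

        #include <cstddef>
        #include <memory>
        #include <string>
        #include <vector>

        struct PathState {
            std::vector<std::string> segments;
        };

        class PathList {
        public:
            class iterator {
            public:
                iterator(std::shared_ptr<PathState> s, std::size_t i)
                    : state_(std::move(s)), idx_(i) {}
                const std::string& operator*() const { return state_->segments[idx_]; }
                iterator& operator++() { ++idx_; return *this; }
                bool operator!=(const iterator& other) const { return idx_ != other.idx_; }
            private:
                std::shared_ptr<PathState> state_;  // shared ownership keeps the data alive
                std::size_t idx_;
            };

            explicit PathList(std::vector<std::string> segs)
                : state_(std::make_shared<PathState>()) {
                state_->segments = std::move(segs);
            }

            iterator begin() const { return iterator(state_, 0); }
            iterator end()   const { return iterator(state_, state_->segments.size()); }

        private:
            std::shared_ptr<PathState> state_;
        };

    An iterator obtained from a PathList built this way stays safe to dereference even after the PathList itself goes out of scope, which is exactly the behaviour the question is weighing against the STL convention of simply declaring such iterators invalid.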

    Read the article

  • Speeding up templates in GAE-Py by aggregating RPC calls

    - by Sudhir Jonathan
    Here's my problem: class City(Model): name = StringProperty() class Author(Model): name = StringProperty() city = ReferenceProperty(City) class Post(Model): author = ReferenceProperty(Author) content = StringProperty() The code isn't important... its this django template: {% for post in posts %} <div>{{post.content}}</div> <div>by {{post.author.name}} from {{post.author.city.name}}</div> {% endfor %} Now lets say I get the first 100 posts using Post.all().fetch(limit=100), and pass this list to the template - what happens? It makes 200 more datastore gets - 100 to get each author, 100 to get each author's city. This is perfectly understandable, actually, since the post only has a reference to the author, and the author only has a reference to the city. The __get__ accessor on the post.author and author.city objects transparently do a get and pull the data back (See this question). Some ways around this are Use Post.author.get_value_for_datastore(post) to collect the author keys (see the link above), and then do a batch get to get them all - the trouble here is that we need to re-construct a template data object... something which needs extra code and maintenance for each model and handler. Write an accessor, say cached_author, that checks memcache for the author first and returns that - the problem here is that post.cached_author is going to be called 100 times, which could probably mean 100 memcache calls. Hold a static key to object map (and refresh it maybe once in five minutes) if the data doesn't have to be very up to date. The cached_author accessor can then just refer to this map. All these ideas need extra code and maintenance, and they're not very transparent. What if we could do @prefetch def render_template(path, data) template.render(path, data) Turns out we can... hooks and Guido's instrumentation module both prove it. If the @prefetch method wraps a template render by capturing which keys are requested we can (atleast to one level of depth) capture which keys are being requested, return mock objects, and do a batch get on them. This could be repeated for all depth levels, till no new keys are being requested. The final render could intercept the gets and return the objects from a map. This would change a total of 200 gets into 3, transparently and without any extra code. Not to mention greatly cut down the need for memcache and help in situations where memcache can't be used. Trouble is I don't know how to do it (yet). Before I start trying, has anyone else done this? Or does anyone want to help? Or do you see a massive flaw in the plan?
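    For comparison, a hedged sketch of the manual key-collection approach from option 1 above, using the old db API (error handling and missing-entity checks omitted); it turns the 1 + 200 gets into one query plus two batch gets:

        from google.appengine.ext import db

        posts = Post.all().fetch(limit=100)

        # collect referenced keys without triggering the lazy dereference
        author_keys = [Post.author.get_value_for_datastore(p) for p in posts]
        authors = dict(zip(author_keys, db.get(author_keys)))     # one batch get

        city_keys = [Author.city.get_value_for_datastore(a) for a in authors.values()]
        cities = dict(zip(city_keys, db.get(city_keys)))          # one more batch get

        # build plain template data instead of letting the template dereference lazily
        data = []
        for p in posts:
            author = authors[Post.author.get_value_for_datastore(p)]
            city = cities[Author.city.get_value_for_datastore(author)]
            data.append({"content": p.content,
                         "author_name": author.name,
                         "city_name": city.name})

    This is the "re-construct a template data object" cost the question mentions; the proposed @prefetch decorator is essentially an attempt to get the same batching without having to write this by hand for every handler.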

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should be able to perform on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting, I found myself in trouble. Sprite size is limited (to usually 64 pixels). I'm calculating size using this formula: gl_PointSize = in_point_size * some_factor / distance_to_camera, to make particle sizes proportional to distance to camera. But at some point, when the camera is close enough, the problem with the size limitation emerges and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy. All vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better approach on how to organise per-particle data? Should I use buffers or vertex arrays, or is there no difference because each time I have to update all particles' data? About particle simulation... Where should it be done? On the CPU or on the vertex processor? Something tells me that a mobile's CPU would do it faster than its vertex unit (at least today in 2012 :). So, any advice on how to make a simple and efficient particle system without the particle size limitation, for mobile devices, would be appreciated. (Animation of the camera passing through particles should be realistic.)

    Read the article

  • CodePlex Daily Summary for Saturday, November 10, 2012

    CodePlex Daily Summary for Saturday, November 10, 2012Popular ReleasesImageGlass: Version 1.5: http://i1214.photobucket.com/albums/cc483/phapsuxeko/ImageGlass/1.png v1.5.4401.3015 Thumbnail bar: Increase loading speed Thumbnail image with ratio Support personal customization: mouse up, mouse down, mouse hover, selected item... Scroll to show all items Image viewer Zoom by scroll, or selected rectangle Speed up loading Zoom to cursor point New background design and customization and others... v1.5.4430.483 Thumbnail bar: Auto move scroll bar to selected image Show / Hi...Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 002: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes: A fix for the Netflix example from Chapter 6 that was missing a service reference A fix for the ImageHelper issue (images were not being saved) - this was due to the buffer being inadequate and required streaming the writeable bitmap to a buffer first before encoding and savingmyCollections: Version 2.3.2.0: New in this version : Added TheGamesDB.net API for Games and NDS Added Support for Windows Media Center Added Support for myMovies Added Support for XBMC Added Support for Dune HD Added Support for Mede8er Added Support for WD HDTV Added Fast search options Added order by Artist/Album for music You can now create covers and background for games You can now update your ID3 tag with the info of myCollections Fixed several provider Performance improvement New Splash ...Draw: Draw 1.0: Drawing PadPdfReport: PdfReport 1.4: - Added Html Footer Template. - Added a new HeaderTemplate (HtmlHeader) to simplify creating the headers of pages and groups by using HTML. See HtmlHeader/HtmlHeaderPdfReport.cs sample for more info. - Added a new sample (HtmlCellTemplate/HtmlCellTemplatePdfReport.cs) to show how to use the HtmlField template for creating the custom cell templates. - Added transparency setting to DiagonalWatermark. See ProgressReportPdfReport for more info. - Added DynamicCompile/DynamicCompilePdfReport.cs sa...Player Framework by Microsoft: Player Framework for Windows 8 (v1.0): IMPORTANT: List of breaking changes from preview 7 Ability to move control panel or individual elements outside media player. more info... New Entertainment app theme for out of the box support for Windows 8 Entertainment app guidelines. more info... VSIX reference names shortened. Allows seeing plugin name from "Add Reference" dialog without resizing. FreeWheel SmartXML now supports new "Standard" event callback type. Other minor misc fixes and improvements ADDITIONAL DOWNLOADSSmo...WebSearch.Net: WebSearch.Net 3.1: WebSearch.Net is an open-source research platform that provides uniform data source access, data modeling, feature calculation, data mining, etc. It facilitates the experiments of web search researchers due to its high flexibility and extensibility. The platform can be used or extended by any language compatible for .Net 2 framework, from C# (recommended), VB.Net to C++ and Java. Thanks to the large coverage of knowledge in web search research, it is necessary to model the techniques and main...Umbraco CMS: Umbraco 4.10.0: NugetNuGet BlogRead the release blog post for 4.10.0. Whats newMVC support New request pipeline Many, many bugfixes (see the issue tracker for a complete list) Read the documentation for the MVC bits. 
Breaking changesWe have done all we can not to break backwards compatibility, but we had to do some minor breaking changes: Removed graphicHeadlineFormat config setting from umbracoSettings.config (an old relic from the 3.x days) U4-690 DynamicNode ChildrenAsList was fixed, altering it'...MySQL Tuner for Windows: 0.3: Welcome to the third beta of MySQL Tuner for Windows! This release fixes bugs in the displaying of numbers, and a crash that occurred due to the program incorrectly closing and disposing of resources, Be warned that there will be bugs in this release, so please do not use on production or critical systems. Do post details of issues found to the issue tracker, and I will endeavour to fix them, when I can. I would love to have your feedback, and if possible your support! Requirements Microso...SharePoint Manager 2013: SharePoint Manager 2013 Release ver 1.0.12.1106: SharePoint Manager 2013 Release (ver: 1.0.12.1106) is now ready for SharePoint 2013. The new version has an expanded view of the SharePoint object model and has been tested on SharePoint 2013 RTM. As a bonus, the new version is also available for SharePoint 2010 as a separate download.D3D9Client: D3D9Client R7: New release for Orbiter 2010-P1 - Added horizon/sun angle for night-lights into the configuration file (default 10deg) - Some runway lights related bugs are fixed - Added more configuration options for runway lightsFiskalizacija za developere: FiskalizacijaDev 1.2: Verzija 1.2. je, prije svega, odgovor na novu verziju Tehnicke specifikacije (v1.1.) koja je objavljena prije nekoliko dana. Pored novosti vezanih uz (sitne) izmjene u spomenutoj novoj verziji Tehnicke dokumentacije, projekt smo prošili sa nekim dodatnim feature-ima od kojih je vecina proizašla iz vaših prijedloga - hvala :) Novosti u v1.2. su: - Neusuglašenost zahtjeva (http://fiskalizacija.codeplex.com/workitem/645) - Sample projekt - iznosi se množe sa 100 (http://fiskalizacija.codeplex.c...MFCMAPI: October 2012 Release: Build: 15.0.0.1036 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeJayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.3: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video Sencha Touch 2 example app using JayData: Netflix browser. What's new in JayData 1.2.3 For detailed release notes check the release notes. TypeScript supportWrite your code in a ...MCEBuddy 2.x: MCEBuddy 2.3.7: Changelog for 2.3.7 (32bit and 64bit) 1. Improved performance of MP4 Fast and M4V Fast Profiles (no deinterlacing, removed --decomb) 2. Improved priority handling 3. Added support for Pausing and Resume conversions 4. Added support for fallback to source directory if network destination directory is unavailable 5. MCEBuddy now installs ShowAnalyzer during installation 6. 
Added support for long description atom in iTunesFoxyXLS: FoxyXLS Releases: Source code and samplesWindow Manager: Window Manager 1.0: First releaseProDinner - ASP.NET MVC Sample (EF4.4, N-Tier, jQuery): 8: update to ASP.net MVC Awesome 3.0 udpate to EntityFramework 4.4 update to MVC 4 added dinners grid on homepageASP.net MVC Awesome - jQuery Ajax Helpers: 3.0: added Grid helper added XML Documentation added textbox helper added Client Side API for AjaxList removed .SearchButton from AjaxList AjaxForm and Confirm helpers have been merged into the Form helper optimized html output for AjaxDropdown, AjaxList, Autocomplete works on MVC 3 and 4BlogEngine.NET: BlogEngine.NET 2.7: Cheap ASP.NET Hosting - $4.95/Month - Click Here!! Click Here for More Info Cheap ASP.NET Hosting - $4.95/Month - Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.7 instructions. If you looking for Web Application Project, ...New ProjectsAuthor-it Variables Report Plug-in: The Variable Information Plug-in is an Author-it plug-in that displays information about all the variables in a library.bobthebuilder: Bob the builderCalendarSite: This webapplicaton is a calendar and appoinement manager app.ControlIP: This project helps users make the control task in a projectDaily Math Training: Daily Math Training app is Windows 8 app for children.Diablo III: This Project will contain several Diablo III related Implementations (a Windows 8 App another API Implementation, maybe some Calculators...)gelform: Ejercicio UTNJHashWin: Calcular hash MD5 SHA1 SHA256 SHA512 de arquivos MultiTask .NET 4.0 C# Marron TFS: Marron TFSMiddle Tennessee State University Robotics Team: Home of the MTSU Raider Robotics team.myexam: examN2F Router: The Zibings, N2Framework, url mapping/router project.NZBMatrix Advanced Feed Reader: This project is simple filter for NZBMatrix website feeds to update user with new movies which falls into the predefined customizable search templates.OpenTrading: Projeto em VB .net 3.5 que utiliza Active Record para mapeamento do banco de dados diretamente na classe das entidades e NHibernate para persistência dos dadosProjeto Parque - PI: Este projeto é resultado de um trabalho que realizamos todos os semestres na faculdade. 4º BSI - Centro Universitário Senac. QuesSys: this project is about examinationsample of inheritance: this project a sample of inheritance between classes at C#SharePoint 2013 Search Query Tool: Tool to test and debug the Search REST API in SharePoint 2013. Issue GET and POST search and suggestions queries. Use also against SharePoint Online. SharePoint Metro Grid: SharePoint 2010/2013 web part that displays links in Metro Tile Grid format. Just like Windows 8. Includes 215 images in 7 different sizes. 
Or use your own.SQL Server Partitioned Table Framework: The SQL Server Partitioned Table Framework (PTF) consists of a set of T-SQL Procedures that ease the maintenance work associated with partitioned tables.String -By GigaPlux Inc.: String OS is a operating system that was developed using C# and is based on the cosmos project .It has a beautiful File System by GRUNT.svu cms: first cms with chat and message between user vConsole for Windows 8: Just a very simple little app to help IT folks stuck in vSphere 5.0 with Win 8 console issues until you get to the next version of vSphereWikiprep#: The Wikiprep program is completely written in C# and based on .Net 4.0 . It processes Wikipedia dumps.Windows 8 Lock Screen: Windows 8 Lock Screen is a app that works like the Windows 8 Lock Screenwsccproject: updates will be made in due time during development

    Read the article

  • Lubuntu 12.04 on Acer laptop boots to blank blue screen

    - by WGCman
    My previous question on this was closed, but I am posting it again as the solution which my son eventually found may assist other users of the forum, or someone may be able to tweak the solution to improve the performance. Having installed Kubuntu 12.04.01 from a live USB onto my desktop, I wanted to do the same on my laptop, an Acer Aspire 1362 Laptop, which has 256MB RAM (actually 512 "on the box", but a good deal can be borrowed by the graphics!). I found Kubuntu wouldn't run on so little memory but downloaded: Lubuntu-12.04-alternate-i386.iso, which I understood was light enough to go. The laptop has one internal 40GB Toshiba hard drive divided into 3 partitions: C,19GB with Windows XP, Windows program files and some data, D, 19GB mostly data, and a small 2GB partition with some Acer software, which XP can't normally “see”. I transferred most of the contents of D to a memory stick, leaving 16GB free for Lubuntu. I did not want to dump XP yet, though it is painfully slow. I installed Lubuntu from then USB stick, accepting the default answers to most of the questions. The D: partition was further partitioned into a 500MB boot partition, 10GB for Linux, 2GB Swap and 6GB for data shareable between Linux and Windows. I had no error messages during installation, rebooted, was offered the choice of Ubuntu or XP, and selected the former. After a few minutes, I get a dark blue screen announcing Lubuntu with five dots underneath which lighten in turn. Eventually the lights stopped, and whatever I try the screen remains blank apart from “Lubuntu” I tried several solutions suggested on the forum for “identical” questions but without success.

    Read the article

  • Flex AS3: ProgressBar doesn't move

    - by jolierouge
    Hey All, I am a little stuck and need some advice/help. I have a progress bar: <mx:ProgressBar id="appProgress" mode="manual" width="300" label="{appProgressMsg}" minimum="0" maximum="100"/> I have two listener functions, one sets the progress, and one sets the appProgressMsg: public function incProgress(e:TEvent):void { var p:uint = Math.floor(e.data.number / e.data.total * 100); trace("Setting Perc." + p); appProgress.setProgress(p, 100); } public function setApplicationProgressStep(e:TEvent):void { trace("Setting step:" + e.data); appProgressMsg = e.data; } I want to reuse this progress bar alot. And not necessarily for ProgressEvents, but when going through steps. For instance, I loop over a bunch of database inserts, and want to undate the progress etc. Here is a sample: public function updateDatabase(result:Object):void { var total:int = 0; var i:int = 0; var r:SQLResult; trace("updateDatabase called."); for each (var table:XML in this.queries.elements("table")) { var key:String = table.attribute("name"); if (result[key]) { send(TEvent.UpdateApplicationProgressStep, "Updating " + key); i = 1; total = result[key].length; for each (var row:Object in result[key]) { //now, we need to see if we already have this record. send(TEvent.UpdateApplicationProgress, { number:i, total: total } ); r = this.query("select * from " + key + " where server_id = '" + row.id + "'"); if (r.data == null) { //there is no entry with this id, make one. this.query(table.insert, row); } else { //it exists, so let's update. this.update(key, row); } i++; } } } } Everything works fine. That is, the listener functions are called and I get trace output like: updateDatabase called. Setting step:Updating project Setting Perc 25 Setting Perc 50 Setting Perc 75 Setting Perc 100 The issue is, only the very last percent and step is shown. that is, when it's all done, the progress bar jumps to 100% and shows the last step label. Does anyone know why this is? Thanks in advance for any help, Jason

    Read the article

  • Event notifications for Reporting Systems

    - by Marc Schluper
    The last couple of months I have been working on an application that allows people to browse a data mart. Nice, but nothing new. In this context I have an idea that I want to publish before anyone else patents it: event notifications. You see, reporting systems are not used as much as we’d like. Typically, users don’t know where to look for reports that might interest them. At best, there are some standard reports that people generate every so often, i.e. based on a time trigger. Or some reporting systems can be configured to send monthly reports around, for convenience. But apart from that, the reporting system is just sitting there, waiting for the rare curious user who makes the effort to dig a bit for treasures to be found. Wouldn’t it be great if there were data triggers? Imagine we could configure the reporting system to let us know when something interesting has happened. It would send us a message containing a link that would take us to the relevant section of the reporting system, showing a report with all the data pertaining to that event, preparing us for proper actions. Here in the North West this would really be great. You see, it rains here most of the time from October to June, so why even check the weather forecast? But sometimes, sometimes it snows. And sometimes the sun shines. So rather than me going to the weather site and seeing over and over again that it will be raining, making me think “why bother?” I’d like to configure the weather site so that it lets me know when the rain stops. Now, hopefully nobody has patented this idea already. Let me know.

    Read the article

  • ZFS Basics

    - by user12614620
    Stage 1 basics: creating a pool # zpool create $NAME $REDUNDANCY $DISK1_0..N [$REDUNDANCY $DISK2_0..N]... $NAME = name of the pool you're creating. This will also be the name of the first filesystem and, by default, be placed at the mountpoint "/$NAME" $REDUNDANCY = either mirror or raidzN, and N can be 1, 2, or 3. If you leave N off, then it defaults to 1. $DISK1_0..N = the disks assigned to the pool. Example 1: zpool create tank mirror c4t1d0 c4t2d0 name of pool: tank redundancy: mirroring disks being mirrored: c4t1d0 and c4t2d0 Capacity: size of a single disk Example 2: zpool create tank raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 Here the redundancy is raidz, and there are five disks, in a 4+1 (4 data, 1 parity) config. This means that the capacity is 4 times the disk size. If the command used "raidz2" instead, then the config would be 3+2. Likewise, "raidz3" would be a 2+3 config. Example 3: zpool create tank mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0 This is the same as the first mirror example, except there are two mirrors now. ZFS will stripe data across both mirrors, which means that writing data will go a bit faster. Note: you cannot create a mirror of two raidzs. You can create a raidz of mirrors, but to do that requires trickery.

    Read the article

  • Using PDO with MVC

    - by mister martin
    I asked this question at stackoverflow and received no response (closed as duplicate with no answer). I'm experimenting with OOP and I have the following basic MVC layout: class Model { // do database stuff } class View { public function load($filename, $data = array()) { if(!empty($data)) { extract($data); } require_once('views/header.php'); require_once("views/$filename"); require_once('views/footer.php'); } } class Controller { public $model; public $view; function __construct() { $this->model = new Model(); $this->view = new View(); // determine what page we're on $page = isset($_GET['view']) ? $_GET['view'] : 'home'; $this->display($page); } public function display($page) { switch($page) { case 'home': $this->view->load('home.php'); break; } } } These classes are brought together in my setup file: // start session session_start(); require_once('Model.php'); require_once('View.php'); require_once('Controller.php'); new Controller(); Now where do I place my database connection code and how do I pass the connection onto the model? try { $db = new PDO('mysql:host='.DB_HOST.';dbname='.DB_DATABASE.'', DB_USERNAME, DB_PASSWORD); $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); } catch(PDOException $err) { die($err->getMessage()); } I've read about Dependency Injection, factories and miscellaneous other design patterns talking about keeping SQL out of the model, but it's all over my head using abstract examples. Can someone please just show me a straight-forward practical example?
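    One straightforward possibility (a hedged sketch of the dependency-injection idea mentioned above, not the only pattern): create the PDO instance once in the setup file and hand it down through the Controller into the Model's constructor. Class names follow the question; the posts table and getPosts method are made up for illustration.

        <?php
        // setup.php (sketch): build the connection once, pass it down
        try {
            $db = new PDO('mysql:host=' . DB_HOST . ';dbname=' . DB_DATABASE, DB_USERNAME, DB_PASSWORD);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        } catch (PDOException $err) {
            die($err->getMessage());
        }
        new Controller($db);

        // Controller.php (sketch)
        class Controller {
            public $model;
            public $view;
            function __construct(PDO $db) {
                $this->model = new Model($db);   // the model receives the connection
                $this->view  = new View();
                $page = isset($_GET['view']) ? $_GET['view'] : 'home';
                $this->display($page);
            }
            // display() as in the question
        }

        // Model.php (sketch)
        class Model {
            private $db;
            public function __construct(PDO $db) {
                $this->db = $db;
            }
            public function getPosts() {
                // hypothetical example query
                $stmt = $this->db->query('SELECT * FROM posts');
                return $stmt->fetchAll(PDO::FETCH_ASSOC);
            }
        }

    This keeps the SQL inside the Model while keeping the connection details out of it, which is the usual reading of "keep SQL out of the controller, keep configuration out of the model".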

    Read the article

  • iPhone Development: Use POST to submit a form

    - by Mario
    I've got the following html form: <form method="post" action="http://shk.ecomd.de/up.php" enctype="multipart/form-data"> <input type="hidden" name="id" value="12345" /> <input type="file" name="pic" /> <input type="submit" /> </form> And the following iPhone SDK Submit method: - (void)sendfile { UIImage *tempImage = [UIImage imageNamed:@"image.jpg"]; NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init]; [request setHTTPMethod:@"POST"]; [request setURL:[NSURL URLWithString:@"http://url..../form.php"]]; NSString *boundary = @"------------0xKhTmLbOuNdArY"; NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@",boundary]; [request addValue:contentType forHTTPHeaderField: @"Content-Type"]; NSMutableData *body = [NSMutableData data]; [body appendData:[[NSString stringWithFormat:@"\r\n%@\r\n",boundary] dataUsingEncoding:NSASCIIStringEncoding]]; [body appendData:[@"Content-Disposition: form-data; name=\"id\"\r\n\r\n" dataUsingEncoding:NSASCIIStringEncoding]]; [body appendData:[@"12345" dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[[NSString stringWithFormat:@"\r\n%@\r\n",boundary] dataUsingEncoding:NSASCIIStringEncoding]]; [body appendData:[@"Content-Disposition: form-data; name=\"pic\"; filename=\"photo.png\"\r\n" dataUsingEncoding:NSASCIIStringEncoding]]; [body appendData:[@"Content-Type: application/octet-stream\r\n\r\n" dataUsingEncoding:NSASCIIStringEncoding]]; [body appendData:[NSData dataWithData:UIImageJPEGRepresentation(tempImage, 90)]]; [body appendData:[[NSString stringWithFormat:@"\r\n%@",boundary] dataUsingEncoding:NSASCIIStringEncoding]]; // setting the body of the post to the reqeust [request setHTTPBody:body]; NSError *error; NSURLResponse *response; NSData* result = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]; NSString* aStr = [[[NSString alloc] initWithData:result encoding:NSASCIIStringEncoding] autorelease]; NSLog(@"Result: %@", aStr); [request release]; } This does not work, but I have no clue why. Can you please help me?!! What am I doing wrong?

    Read the article

  • JavaCC: How can one exclude a string from a token? (A.k.a. understanding token ambiguity.)

    - by java.is.for.desktop
    Hello, everyone! I have already had many problems understanding how ambiguous tokens can be handled elegantly (or at all) in JavaCC. Let's take this example: I want to parse an XML processing instruction. The format is "<?" <target> <data> "?>": target is an XML name, data can be anything except ?>, because that is the closing tag. So, let's define this in JavaCC (I use lexical states, in this case DEFAULT and PROC_INST):

        TOKEN : <#NAME : (very-long-definition-from-xml-1.1-goes-here) >
        TOKEN : <WSS : (" " | "\t")+ >   // WSS = whitespaces

        <DEFAULT>   TOKEN : {<PI_START  : "<?"   > : PROC_INST}
        <PROC_INST> TOKEN : {<PI_TARGET : <NAME> >}
        <PROC_INST> TOKEN : {<PI_DATA   : ~[]    >}   // accept everything
        <PROC_INST> TOKEN : {<PI_END    : "?>"   > : DEFAULT}

    Now the part which recognizes processing instructions:

        void PROC_INSTR() : {}
        {
            (
                <PI_START>
                (t=<PI_TARGET>) {System.out.println("target: " + t.image);}
                <WSS>
                (t=<PI_DATA>)   {System.out.println("data: " + t.image);}
                <PI_END>
            ) {}
        }

    Let's test it with <?mytarget here-goes-some-data?>. The target is recognized ("target: mytarget"), but then I get my favorite JavaCC parsing error:

        !! procinstparser.ParseException: Encountered "" at line 1, column 15.
        !! Was expecting one of:
        !!

    Encountered nothing? Was expecting nothing? Or what? Thank you, JavaCC! I know that I could use the MORE keyword of JavaCC, but that would give me the whole processing instruction as one token, so I'd have to parse/tokenize it further myself. Why should I do that? Am I writing a parser that does not parse? The problem is (I guess): since <PI_DATA> recognizes "everything", my definition is wrong. I should tell JavaCC to recognize "everything except ?>" as processing-instruction data. But how can that be done? NOTE: I can only exclude single characters using ~["a"|"b"|"c"]; I can't exclude strings such as ~["abc"] or ~["?>"]. Another great anti-feature of JavaCC. Thank you.
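    One workaround that keeps the question's token definitions unchanged, offered as an untested sketch: because JavaCC always prefers the longest match, the two-character "?>" (PI_END) beats the one-character PI_DATA whenever the closing delimiter starts, so the production can simply loop over tokens until PI_END appears. Two details to note: a word inside the data may still tokenize as PI_TARGET (it looks like a NAME), so the loop accepts either kind, and the <WSS> reference is dropped because, as written above, WSS is only produced in the DEFAULT state.

        void PROC_INSTR() : { Token t; StringBuilder data = new StringBuilder(); }
        {
            <PI_START>
            (t=<PI_TARGET>) { System.out.println("target: " + t.image); }
            (
                // collect everything up to (but not including) "?>"
                ( t=<PI_TARGET> | t=<PI_DATA> ) { data.append(t.image); }
            )*
            <PI_END> { System.out.println("data: " + data.toString().trim()); }
        }

    An alternative would be a second lexical state entered after the target token, in which only PI_DATA and PI_END are defined; that removes the two-token alternative inside the loop at the cost of one more state.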

    Read the article

  • Making Modular, Reusable and Loosely Coupled MVC Components

    - by Dusan
    I am building an MVC3 application and need some general guidelines on how to manage complex client-side interaction between my components. Here is my general definition of a component: a component has its own controller, model and view. All of the component's logic is placed inside these three parts, and the component is sort of "standalone": it contains its own form and the data needed for interaction, it updates itself with Ajax, and so on. Beside this internal logic and behavior, the component needs to be able to "talk" to the outside world. By this I mean it should expose data and events, so that when the component is embedded in a page it can notify other components, which can then update based on the current state and data.

    I have an idea to use a client-side ViewModel (in JavaScript) which would hook up all relevant components on the page and control the interaction between them. This would make components reusable and modular - independent of the context in which they are used. How would you do this? I am a bit stuck, as I do not know whether this is a good approach and whether it is technically possible using JavaScript/jQuery. The confusing part is the update via Ajax: how do I ensure that a component stays properly linked to the ViewModel when it is updated via Ajax (or, even worse, removed or dynamically added)? Also, how should this ViewModel be constructed, and which techniques should be used here and in the components so that they work together?

    On the web I have found various examples of similar approaches, but they are either oversimplified or overly specific, and do not provide a valuable resource or a general solution for this kind of implementation. If you have some serious examples, that would also be very helpful. Note: my aim is to make interactions between many components on the same page simpler, more robust and more elegant.
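    One very small sketch of the kind of client-side ViewModel/mediator described above - plain JavaScript with a jQuery-style delegated handler at the end; all topic and component names are invented for illustration:

        // a bare-bones mediator: components publish and subscribe by topic name
        var ViewModel = {
            topics: {},
            subscribe: function (topic, handler) {
                (this.topics[topic] = this.topics[topic] || []).push(handler);
            },
            publish: function (topic, data) {
                (this.topics[topic] || []).forEach(function (handler) { handler(data); });
            }
        };

        // one component announces its own state change ...
        ViewModel.publish('order:selected', { orderId: 42 });

        // ... and another component reacts without knowing who raised the event
        ViewModel.subscribe('order:selected', function (data) {
            // e.g. refresh an order-details panel via Ajax using data.orderId
        });

        // for markup replaced by Ajax, delegated DOM handlers survive the swap:
        // $(document).on('click', '.component [data-action]', function () { /* publish here */ });

    Because the handlers are registered on the mediator (and, for DOM events, delegated to the document), a component that is re-rendered, removed or added by Ajax does not need to be re-wired by hand.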

    Read the article

  • Help setting up wireless in Ubuntu 13.04

    - by James
    I'm having problems connecting to WIFI in Ubuntu 13.04, so I was wondering whether filling in the data manually (the IPv4, IPv6, SSID and BSSID info, etc.) would make it work. I did try this before, but maybe I put in the wrong data, or not enough of it. I just don't know how to find out some of the data you need to put in; I'm new and it's confusing. Does anyone know the solution? Here is lspci:

        james@james-MM061:~$ lspci
        00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
        00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 01)
        00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 01)
        00:1c.3 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 4 (rev 01)
        00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 01)
        00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 01)
        00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 01)
        00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 01)
        00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 01)
        00:1f.2 IDE interface: Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [IDE mode] (rev 01)
        00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 01)
        03:00.0 Ethernet controller: Broadcom Corporation BCM4401-B0 100Base-TX (rev 02)
        03:01.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller
        03:01.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 19)
        03:01.2 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 0a)
        03:01.3 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev 05)
        0b:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)

    Computer information: Model: Dell MM061; Mobile Intel(R) 945GM Express Chipset Family [Display adapter] (2x)
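    For what it's worth, the lspci output shows a Broadcom BCM4311 wireless chip, which on Ubuntu releases of that era typically needed extra firmware or the proprietary Broadcom driver rather than any manual IP/SSID settings. A hedged example of the usual commands (run them over a wired connection; package availability can vary by release):

        # open-source b43 driver: install the missing firmware
        sudo apt-get update
        sudo apt-get install firmware-b43-installer
        sudo modprobe -r b43 && sudo modprobe b43

        # alternatively, the proprietary Broadcom STA driver
        # sudo apt-get install bcmwl-kernel-source

        # check that the wireless interface shows up and is not blocked
        rfkill list
        iwconfig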

    Read the article
