Search Results

Search found 7850 results on 314 pages for 'except'.


  • Ada and 'The Book'

    - by Phil Factor
    The long friendship between Charles Babbage and Ada Lovelace created one of the most exciting and mysterious of collaborations ever to have resulted in a technological breakthrough. The fireworks created by the collision of two prodigious mathematical and creative talents resulted in an invention, the Analytical Engine, which went on to change society fundamentally. However, beyond that, we just don't know what the bulk of their collaborative work was about; it was done in strictest secrecy. Even the known outcome of their friendship, the first programmable computer, was shrouded in mystery. At the time, nobody except close friends and family had any idea of Ada Byron's contribution to the invention of the ‘Engine’, and how to program it.

    Her great insight was published in August 1843, under the initials AAL, standing for Ada Augusta Lovelace, her title then being the Countess of Lovelace. It was contained in a lengthy ‘note’ to her translation of a publication that remains the best description of Babbage's amazing Analytical Engine. The secret identity of the person behind those enigmatic initials was finally revealed by Prince de Polignac who, seventy years later, wrote to Ada's daughter to seek confirmation that her mother had, indeed, been the author of the brilliant sentences that described so accurately how Babbage's mechanical computer could be programmed with punch-cards. L.F. Menabrea's paper on the Analytical Engine first appeared in the 'Bibliotheque Universelle de Geneve' in October 1842, and Ada translated it anonymously for Taylor's 'Scientific Memoirs'. Charles Babbage was surprised that she had not written an original paper, as she already knew a surprising amount about the way the machine worked. He persuaded her to at least write some explanatory notes. These notes ended up extending to four times the length of the original article and represented the first published account of how a machine could be programmed to perform any calculation. Her example of programming the Bernoulli sequence would have worked on the Analytical Engine had the device's construction been completed, and gave Ada an unassailable claim to have invented the art of programming.

    What was the reason for Ada's secrecy? She was the only legitimate child of Lord Byron, who was probably the best-known celebrity of the age, so she was already famous. She was a senior aristocrat, with titles, a fortune in money and vast estates in the Midlands. She had political influence, and was the cousin of Lord Melbourne, who was the Prime Minister at that time. She was friendly with the young Queen Victoria. Her mathematical activities were a pastime, and not one that would be considered by others to be in keeping with her roles and responsibilities.

    You wouldn't dare to dream up a fictional heroine like Ada. She was dazzlingly beautiful and talented. She could speak several languages fluently and play some musical instruments with professional skill. Contemporary accounts refer to her being 'accomplished in science, art and literature'. On top of that, she was a brilliant mathematician, a talent inherited from her mother, Annabella Milbanke. In her mother's circle of literary and scientific friends was Charles Babbage, and Ada's friendship with him dates from her teenage zest for mathematics. She was one of the first people he'd ever met who understood what he had attempted to achieve with the 'Difference Engine', and with whom he could converse as an intellectual equal.
    He arranged for her to have an education from the most talented academics in the country. Ada melted the heart of the cantankerous genius to the point that he became a faithful and loyal father-figure to her. She was one of the very few who could grasp the principles of the later, and very different, ‘Analytical Engine’, which was designed from the start to tackle a variety of tasks.

    Sadly, Ada Byron's life ended less than a decade after completing the work that assured her long-term fame, in November 1852. She was dying of cancer, her gambling habits had caused her to run up huge debts, she'd had more than one affair, and she was being blackmailed. Her brilliant but unempathic mother was nursing her in her final illness, destroying her personal letters and records, and repaying her debts. Her husband was distraught but helpless. Charles Babbage, however, maintained his steadfast paternalistic friendship to the end. She appointed her loyal friend to be her executor.

    For years, she and Babbage had been working together on a secret project, known only as 'The Book'. We have a clue to what it was in a letter written by her nine years earlier, on 11th August 1843. It was a joint project by herself and Lord Lovelace, her husband, and was intended to involve Babbage's 'undivided energies'. It involved 'consulting your Engine' (it required Babbage’s computer). The letter gives no hint about the project except for the high-minded nature of its purpose, and its highly mathematical nature. From then on, the surviving correspondence between the two gives only veiled references to 'The Book'. There isn't much, since Babbage later destroyed any letters that could have damaged her reputation within the Establishment. 'I cannot spare the book today, which I am very sorry for. At the moment I want it for constant reference, but I think you can have it tomorrow' (Oct 1844). And 'I will send you the book directly, and you can say, when you receive it, how long you will want to keep it' (Nov 1844). The two of them were obviously intent on the work: she writes, four years later, 'I have an engagement for Wednesday which will prevent me from attending to your wishes about the book' (Dec 1848). This was something that they both needed to work on, but could not do in parallel: 'I will send the book on Tuesday, and it can be left with you till Friday' (11 Feb 1849). After six years' work, it had been so well-handled that it was beginning to fall apart: 'Don't forget the new cover you promised for the book. The poor book is very shabby and wants one' (20 Sept 1849).

    So what was going on? The word 'book' was not a code-word: it was a real book, probably a 'printer's blank' - plain paper, but properly bound so printers and publishers could show off how the published work might look. The hints from the correspondence are of advanced mathematics. It is obvious that the book was travelling between them, back and forth, each one working on it for less than a week before passing it back. Ada and her husband were certainly involved in gambling large sums of money on the horses, and so most biographers have concluded that the three of them were trying to calculate the mathematical odds on the horses. This theory has three large problems. Firstly, Ada's original letter proposing the project refers to its high-minded nature. Babbage was temperamentally opposed to gambling and would scarcely have given so much time to the project, even though he was devoted to Ada.
    Secondly, Babbage would very soon have realized the hopelessness of trying to beat the bookies. This sort of betting never attracts people of his intellectual temperament. The third problem is that any work on calculating the odds on horses would not need a well-thumbed book to pass back and forth between them; they would not have had to work in series.

    The original project was instigated by Ada, along with her husband, William King-Noel, 1st Earl of Lovelace. Charles Babbage was invited to join the project after the couple had come up with the idea. What could William have contributed? One might assume that William was a Bertie Wooster character, addicted only to the joys of the turf, but this was far from the truth. He was a scientist, a Cambridge graduate who was later elected a Fellow of the Royal Society. After Eton, he went to Trinity College, Cambridge. On graduation, he entered the diplomatic service and acted as secretary under Lord Nugent, who was Lord Commissioner of the Ionian Islands. William was very friendly with Babbage too, able to discuss scientific matters on equal terms. He was a capable engineer who invented a process for bending large timbers by the application of steam heat. He delivered a paper to the Institution of Civil Engineers in 1849, and received praise from the great engineer, Isambard Kingdom Brunel. As well as being Lord Lieutenant of the County of Surrey for most of Victoria's reign, he had time for a string of scientific and engineering achievements. Whatever the project was, it is unlikely that William was a junior partner.

    After Ada's death, the project disappeared. Then, two years later, Babbage, through one of his occasional outbursts of temper, demonstrated that he was able to decrypt one of the most powerful of secret codes, Vigenère's autokey cipher. All contemporary diplomatic and military messages used a variant of this cipher. Babbage had made three important discoveries, namely: the mathematical law of this cipher, the principle of the key periodicity, and the technique of the symmetry of position. The technique is now known as the Kasiski examination, also called the Kasiski test, but Babbage got there first. At one time, he listed amongst his future projects the writing of a book, 'The Philosophy of Decyphering', but it never came to anything. This discovery was going to change the course of history, since it was used to decipher the Russians’ military dispatches in the Crimean war. Babbage himself played a role during the Crimean War as a cryptographical adviser to his friend, Rear-Admiral Sir Francis Beaufort of the Admiralty.

    This is as much as we can be certain about in trying to make sense of the bulk of the time that Charles Babbage and Ada Lovelace worked together. Nine years of intensive work, involving the 'Engine' and a great deal of mathematics and research, seems to have been lost: or has it? I've argued in the past (http://www.simple-talk.com/community/blogs/philfactor/archive/2008/06/13/59614.aspx) that the cracking of the Vigenère autokey cipher was a fundamental motive behind the British Government's support and funding of the 'Difference Engine'. The Duke of Wellington, who understood the military significance of being able to read enemy dispatches, was the most steadfast advocate of the project.
    If the three friends were actually doing the work of cracking codes by mathematical techniques that used key periodicity and symmetry of position (the use of a book being passed quickly to and fro is very suggestive), intending then to use the 'Engine' to do the routine cracking of each dispatch, then this is a rather different story. The project was Ada and William's idea. (William had served in the diplomatic service and would be familiar with the use of codes.) This makes Ada Lovelace the initiator of a project which, by giving both Britain, and probably the USA, a diplomatic and military advantage in the second part of the nineteenth century, changed world history.

    Ada would never have wanted any credit for cracking the cipher, and developing the method that rendered all contemporary military and diplomatic ciphering techniques nugatory; quite the reverse. And it is clear from the gaps in the record of the letters between the collaborators that the evidence was destroyed, probably at her request, by her irascible but intensely honorable executor, Charles Babbage. Charles Babbage toyed with the idea of going public, but the Crimean war put an end to that. The British Government had a valuable secret, and intended to keep it that way. Ada and Charles had quite often discussed possible moneymaking projects that would fund the development of the Analytical Engine, the first programmable computer, but their secret work was never in the running as a potential cash cow. I suspect that the British Government was, even then, working on the concealment of a discovery whose value to the nation depended on it remaining so. The success of code-breaking in the Crimean war, and the American Civil war, led to the British and Americans subsequently giving much more weight and funding to the science of decryption. Paradoxically, this brings Ada's contribution even closer to the creation of Colossus, the first programmable electronic digital computer, built at Bletchley Park specifically to crack the Nazis’ secret codes.

    Read the article

  • What are the reasons why Clojure is hyped and PicoLisp widely ignored?

    - by Thorsten
    I recently discovered the Lisp family of programming languages, and it's definitely one of the more diverse and widespread families in the programming language world. I like Elisp because that most wonderful tool Emacs is an Elisp interpreter. But I was looking for one more Lisp dialect to learn and thought Clojure would be the obvious choice nowadays - until I discovered the well-hidden gem PicoLisp. That must be the most intelligent programming environment I have ever seen, like taking the best ideas from Lisp and Smalltalk and adding performance and practicability - and the beauty of parsimony. There is even an Emacs mode for it. PicoLisp must be the productivity world champion when it comes to building business applications with a database and web client - and that's a very common task. It seems that throwing more and more hardware cores at your PicoLisp application makes it faster and faster, and the database is very performant anyway. However, reactions to PicoLisp on general mailing lists etc. are almost hostile (envy?), and there is absolutely no hype and very little publicity (i.e. not one book published). Are there real, justified reasons for this (except the vast amount of Java libs accessible from Clojure - I know that one)? Or is the mainstream getting it wrong again (see C vs Lisp, Java vs Smalltalk, Windows vs Linux) and will it come to the conclusion 10 years later that the JVM was good as an in-between solution, but a really fast Lisp interpreter on multicore machines is much better and allows much cleaner concepts? PS 1: Please note: I'm not interested in Scheme or any Common Lisp dialect, although they might be fine languages. It's just PicoLisp vs Clojure. PS 2: Another thing I like about PicoLisp is its similarity to Elisp in certain aspects (both are descendants of MacLisp?) - it's easier to learn two similar languages. There is so much "dynamic binding bashing" on the web, but two of the most appealing Lisp applications use it.

    Read the article

  • Android “open for embedded”? Must-read Ars Technica article

    - by terrencebarr
    A few days ago Ars Technica published an article “Google’s iron grip on Android: Controlling open source by any means necessary”. If you are considering Android for embedded, this article is a must-read to understand the severe ramifications of Google’s tight (and tightening) control on the Android technology and ecosystem. Some quotes from the Ars Technica article:

    “Android is open – except for all the good parts”
    “Android actually falls into two categories: the open parts from the Android Open Source Project (AOSP) … and the closed source parts, which are all the Google-branded apps”
    “Android open source apps … turn into abandonware by moving all continuing development to a closed source model.”
    “Joining the OHA requires a company to sign its life away and promise to not build a device that runs a competing Android fork.”
    “Google Play Services is a closed source app owned by Google … to turn the ‘Android App Ecosystem’ into the ‘Google Play Ecosystem’”
    “You’re allowed to contribute to Android and allowed to use it for little hobbies, but in nearly every area, the deck is stacked against anyone trying to use Android without Google’s blessing”

    Compare this with a recent Wired article “Oracle Makes Java More Relevant Than Ever”: “Oracle has actually opened up Java even more – getting rid of some of the closed-door machinations that used to be part of the Java standards-making process. Java has been raked over the coals for security problems over the past few years, but Oracle has kept regular updates coming. And it’s working on a major upgrade to Java, due early next year.”

    Cheers, – Terrence

    Filed under: Embedded, Mobile & Embedded. Tagged: Android, embedded, Java Embedded, Open Source

    Read the article

  • Safari Mobile Multi-Line <Select> aka GWT Multi-Line ListBox

    - by McTrafik
    Hi guys. Working on a webapp here that must run on the iPad (so, Safari Mobile). I have this code that works fine in just about anything except the iPad:

    <select class="gwt-ListBox" size="12" multiple="multiple">
      <option value="Bleeding Eyelashes">Bleeding Eyelashes</option>
      <option value="Smelly Pupils">Smelly Pupils</option>
      <option value="Bushy Eyebrows">Bushy Eyebrows</option>
      <option value="Green Vessels">Green Vessels</option>
      <option value="Sucky Noses">Sucky Noses</option>
    </select>

    What it's supposed to look like is a box with 12 lines and 5 of them filled up. It works fine in FF, IE, Chrome, and Safari on Windows. But when I open it on the iPad, it's just a single line! Styling it with CSS doesn't work; it just makes the single line bigger if I set the height. Is there a way to make it behave the same way as in normal browsers, or do I have to make a custom component? Thanks.

    Read the article

  • How do I roll back to the shipped version of Thunderbird?

    - by kallakafar
    I was using Thunderbird v15.0 on Ubuntu 12.04 LTS till now, and have the Lightning extension installed to manage the calendar within Thunderbird. Everything was working fine until I decided to update Thunderbird to the latest version 16.0 from the Ubuntu repository. The installation was successful, and the profile and everything was taken care of perfectly, except that now Lightning is not working - it is disabled because Lightning v1.7 is NOT compatible with the latest Thunderbird v16 yet. As a result I am at a loss with all my scheduling. Now I would like to go back to Thunderbird v15 so that I can use Lightning. The Ubuntu repository only gives TB v16 now. On the Mozilla site they are still giving v15 for Linux, so I downloaded the tarball and uncompressed it using the command line. Now I have a folder called thunderbird. There are no readme/configuration files. There are the following 'executable files' inside this folder: crashreporter, mozilla-remote-client, plugin-container, thunderbird and thunderbird-bin. I tried invoking thunderbird and thunderbird-bin from the command line using sudo, and still nothing is opening up. I have execute permissions for this folder's contents. I'm quite new to Linux. Please let me know why I'm not able to launch Thunderbird. Did I install it incorrectly? Please let me know if I can get a .deb package for TB v15.
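    For what it's worth, the Mozilla tarball build doesn't need installing - or sudo - at all; it runs in place. A minimal sketch, assuming the folder was extracted into the home directory (paths are examples based on the question):

    cd ~/thunderbird
    ./thunderbird        # run as your normal user; sudo would use root's profile instead
    # if it exits silently, check for missing shared libraries:
    ldd ./thunderbird-bin | grep "not found"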

    Read the article

  • Oracle SQL Developer version 3.2.2 Released

    - by thatjeffsmith
    This is another maintenance release, but I don’t want to minimize the work done in either the 3.2.1 or the 3.2.2 editions. The two releases include more than 400 bug fixes. Version 3.2 should be rocking and rolling and good to go while we work on the next major release! You can find the downloads and bug fixes in the normal places: Download 3.2.2 | Bug fixes

    Connection Names
    If you downloaded and used version 3.2.1 and noticed some of your connection names were no longer valid due to ‘special’ characters, we’ve loosened our restrictions a bit for 3.2.2. You can now go back to using spaces and hyphens in your connection names. Periods, spaces and hyphens should now all work.

    More Copy & Paste Stuff
    While fixing a bug, the developer decided to also enhance the feature while he was in the code. I love seeing this happen organically. No one is sitting over their shoulder with the red magic marker. No, I’m too far away to do that except on very special days. So here’s a ‘trick’ - if you want to copy cells from your grids, just drag the selected cells to the worksheet/editor. You’ll get a comma-delimited list - very handy! Select cells, drag and drop up to the worksheet - voila! Comma-separated values.

    Read the article

  • Why does my PowerBook display “Fixing recursive fault but reboot is needed!” and stop booting?

    - by Blacklight Shining
    I have an old PowerBook G4 that worked (more or less) fine with a previous installation of Ubuntu Desktop 12.04. A few days ago I decided to install Ubuntu Server instead, and got a copy of Ubuntu Server 12.10. The installation seemed to complete successfully, but now, whenever I try to boot the system, it simply halts at some point after I unlock the hard disk. There is a lot of text on the screen (which is normal for me during a boot, except now it's mostly errors and debug information), the last of which is this:

    [ 26.338228] Fixing recursive fault but reboot is needed!

    Pressing Control+Command+Power to force a reboot yields exactly the same results. A search for the error message turned up many temporary solutions involving kernel parameters, but none of them have worked for me. I don't think I can remove the default set of parameters (which I think is quiet splash), but I can pass additional parameters on boot. I've tried booting on AC and battery power, as well as using these combinations of kernel parameters while on battery power: acpi=enable pci=noacpi pci=assign-busse acpi=ht acpi=off nomodeset nomodeset acpi=off. Why am I getting this error and how can I fix it?
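    An aside that may save readers some time (not from the question itself): the acpi=* options are effectively no-ops on this hardware, since ACPI is an x86 firmware interface and PowerPC Macs boot via Open Firmware - so of the parameters tried above, only nomodeset changed anything. On a PowerPC install the bootloader is normally yaboot rather than GRUB, and extra parameters are typed at its boot: prompt. A sketch, assuming the default kernel label is Linux:

    boot: Linux nomodeset
    # or drop to single-user mode to inspect logs before the fault occurs:
    boot: Linux single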

    Read the article

  • LLBLGen Pro and JSON serialization

    - by FransBouma
    I accidentally removed a reply from my previous blog post, and as this blog engine here at weblogs.asp.net is apparently falling apart, I can't re-add it, as it thought it would be wise to disable comment controls on all posts except new ones. So I'll post the reply here as a quote and reply to it. 'Steven' asks: What would the future be for LLBLGen Pro to support JSON for serialization? Would it be worth the effort for a LLBLGen Pro user to bother creating some code templates to produce additional JSON-serializable classes? Or just create some basic POCO classes which could be used for exchange of client/server data and use DTOs to map these back to LLBLGen Pro ones? If I understand the workaround, it is at the expense of losing XML serialization. Well, as described in the previous post, to enable JSON serialization you can do that with a couple of lines and some attribute assignments. However, indeed, the attributes might stop the XML serialization from working, as described in the previous blog post. This is the case if the service you're using serializes objects using the DataContract serializer: this serializer will refuse to serialize the entity objects to XML, as the entity objects implement IXmlSerializable and this is a no-go area for the DataContract serializer. However, if your service doesn't use a DataContract serializer, or you serialize the objects manually to XML using an XML serializer, you're fine. When you want to switch back to XML serialization instead of JSON in WebApi, and you have decorated the entity classes with the data-contract attributes, you can switch off the DataContract serializer by setting a global configuration setting:

    var xml = GlobalConfiguration.Configuration.Formatters.XmlFormatter;
    xml.UseXmlSerializer = true;

    This will make WebApi use the XmlSerializer, and run the normal IXmlSerializable interface implementation.

    Read the article

  • Have you changed your coding style recently? It wasn't hard, was it?

    - by Ernelli
    I used to write code in C-like languages using the Allman style for brace placement:

    void foo(int bar)
    {
        if(bar)
        {
            //...
        }
        else
            return;
        //...
    }

    Now for the last two years I have been working mostly in JavaScript, and when we adopted jslint as part of our QA process, I had to adapt to the Crockford way of doing things. So I had to change my coding style into:

    function foo(bar) {
        if (bar) {
            //...
        } else {
            return;
        }
        //...
    }

    Now, apart from comparing a C/C++ example with JavaScript, I must say that my JavaScript-Crockford coding style has now spread into my C/C++/Java coding when I revise old projects and work on code in those languages, which for example have no problem with single-line statements or ambiguous newline insertion. I used to consider the latter format very awkward. I have never had any problems with adapting my coding style to the one chosen by my predecessors, except for when I was a junior developer, mostly being the sole developer on legacy projects, and the first thing I did was change the indenting style. But now, after a couple of months, I consider the Allman style a little bit too spacious and feel more comfortable with the K&R-like style. Have you changed your coding style during your career?

    Read the article

  • Can't install TeamViewer on Ubuntu 12.04

    - by Roel
    I have a Dell Inspiron 6400 laptop. I was a Windows user but I want to start using Ubuntu from now on. Everything goes well except the installation of TeamViewer. I have downloaded the file from the official website. There, it says to just double-click on the *.deb file and it will be installed automatically. Well, it gives an error: Failed to remove essential system package: You requested to remove a package which is an essential part of your system. Then I tried the second way of installing, which is on the terminal. I typed, as suggested: sudo dpkg -i teamviewer_linux.deb. It started installing but later on failed. Here is a copy of the screen:

    Preparing to replace teamviewer7:i386 7.0.9377 (using teamviewer_linux.deb) ...
    Unpacking replacement teamviewer7:i386 ...
    dpkg: dependency problems prevent configuration of teamviewer7:i386:
     teamviewer7:i386 depends on bash (>= 3.0).
     teamviewer7:i386 depends on libc6 (>= 2.7).
     teamviewer7:i386 depends on libasound2.
     teamviewer7:i386 depends on zlib1g.
     teamviewer7:i386 depends on libxext6.
    dpkg: error processing teamviewer7:i386 (--install):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     teamviewer7:i386

    I have already checked these dependent files in Synaptic and they are all installed. What am I doing wrong?
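    A sketch of the usual recovery path for exactly this dpkg ending (standard apt commands, nothing TeamViewer-specific): the packages Synaptic shows as installed are almost certainly the 64-bit builds, while teamviewer7:i386 needs their 32-bit (i386) counterparts, which is why dpkg still reports them missing.

    # let apt resolve and install the missing (i386) dependencies,
    # then finish configuring the half-installed package:
    sudo apt-get install -f
    # confirm the result:
    dpkg -l | grep teamviewer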

    Read the article

  • Trying to install apache 2.4.10 with openssl 1.0.1i

    - by AlexMA
    I need to install apache 2.4.10 using openssl 1.0.1i. I compiled openssl from source with:

    $ ./config \
      --prefix=/opt/openssl-1.0.1i \
      --openssldir=/opt/openssl-1.0.1i
    $ make
    $ sudo make install

    and apache with:

    ./configure --prefix=/etc/apache2 \
      --enable-access_compat=shared \
      --enable-actions=shared \
      --enable-alias=shared \
      --enable-allowmethods=shared \
      --enable-auth_basic=shared \
      --enable-authn_core=shared \
      --enable-authn_file=shared \
      --enable-authz_core=shared \
      --enable-authz_groupfile=shared \
      --enable-authz_host=shared \
      --enable-authz_user=shared \
      --enable-autoindex=shared \
      --enable-dir=shared \
      --enable-env=shared \
      --enable-headers=shared \
      --enable-include=shared \
      --enable-log_config=shared \
      --enable-mime=shared \
      --enable-negotiation=shared \
      --enable-proxy=shared \
      --enable-proxy_http=shared \
      --enable-rewrite=shared \
      --enable-setenvif=shared \
      --enable-ssl=shared \
      --enable-unixd=shared \
      --enable-ssl \
      --with-ssl=/opt/openssl-1.0.1i \
      --enable-ssl-staticlib-deps \
      --enable-mods-static=ssl
    make
    (would run sudo make install next, but I get an error)

    I'm essentially following the guide here except with slightly newer versions. My problem is I get a linker error when I run make for apache:

    ...
    Making all in support
    make[1]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
    make[2]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
    /usr/share/apr-1.0/build/libtool --silent --mode=link x86_64-linux-gnu-gcc -std=gnu99 -pthread -L/opt/openssl-1.0.1i/lib -lssl -lcrypto \
        -o ab ab.lo /usr/lib/x86_64-linux-gnu/libaprutil-1.la /usr/lib/x86_64-linux-gnu/libapr-1.la -lm
    /usr/bin/ld: /opt/openssl-1.0.1i/lib/libcrypto.a(dso_dlfcn.o): undefined reference to symbol 'dlclose@@GLIBC_2.2.5'

    I tried the answer here, but no luck. I would prefer to just use aptitude, but unfortunately the versions I need aren't available yet. If anyone knows how to fix the linker problem (or what I think is a linker problem), or knows of a better way to tell apache to use a newer openssl, it would be greatly appreciated; I've got apache working otherwise.
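    That undefined reference to dlclose is the classic sign that -ldl is missing from the link line: the static libcrypto.a calls the libdl functions, and nothing after it on the command line provides them. A sketch of the usual workaround, not specific advice for this build (LIBS is honored by autoconf-generated configure scripts; use the same options as above):

    make clean
    ./configure [same options as before] LIBS="-ldl"
    make
    sudo make install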

    Read the article

  • Nautilus crashes after Ubuntu Tweak Package Cleaner [fixed]

    - by Ka7anax
    A few days ago I started having some problems with Nautilus. Basically, when I'm trying to get into a folder it crashes. It's not happening all the time, but in 85% of cases it does... Sometimes, after the crash, all my desktop icons are also gone. The only thing that I think causes this is Ubuntu Tweak - I'm not sure, but the issues started after I ran the Package Cleaner from Ubuntu Tweak... Any ideas?

    ------- EDIT 2 - IMPORTANT !!! ----------
    It seems I fixed this problem by doing these:
    1) I uninstalled this nautilus script - http://mundogeek.net/nautilus-scripts/#nautilus-send-gmail
    2) I installed Nautilus Elementary
    So far it's back to normal... If anything bad happens again I will come back!

    -------- EDIT 1 ----------
    The first time, after running the command (nautilus --quit; nautilus --no-desktop) 3 times, the whole system crashed (except the mouse; I could move the mouse). After a restart I ran it and obtained this:

    Initializing nautilus-gdu extension
    Initializing nautilus-dropbox 0.6.7
    (nautilus:2966): GConf-CRITICAL **: gconf_value_free: assertion `value != NULL' failed
    (nautilus:2966): GConf-CRITICAL **: gconf_value_free: assertion `value != NULL' failed
    Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory
    Please ask your system administrator to enable user sharing.

    and then this:

    cristi@cris-laptop:~$ nautilus --quit; nautilus --no-desktop
    (nautilus:3810): Unique-DBus-WARNING **: Error while sending message: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

    Read the article

  • How to programatically retarget animations from one skeleton to another?

    - by Fraser
    I'm trying to write code to transfer animations that were designed for one skeleton to look correct on another skeleton. The source animations consist only of rotations, except for translations on the root (they're the mocap animations from the CMU motion capture database). Many 3D applications (e.g. Maya) have this facility built in, but I'm trying to write a (very simple) version of it for my game. I've done some work on bone mapping, and because the skeletons are hierarchically similar (bipeds), I can do 1:1 bone mapping for everything but the spine (I can work on that later). The problem, however, is that the base skeleton/bind poses are different, and the bones are different scales (shorter/longer), so if I just copy the rotation straight over it looks very strange. I've tried multiplying by the original bone's absolute rotation, then by the inverse of the target, and vice versa... kind of a shot in the dark, and indeed it didn't work. (Tried relative transformations too.) I'm not sure where to go from here, so if anyone has any resources on stuff like this (papers, source code, etc.), that would be really helpful. Thanks!
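    For reference, the textbook form of this (a sketch of the usual math; all symbols are defined here rather than taken from the question): work per bone in local space relative to the bind pose. If B_s and B_t are the source and target local bind-pose rotations for a mapped bone, and R_s is the source clip's local rotation at some frame, take the delta from the source bind pose, D = inverse(B_s) * R_s, and apply it to the target bind pose: R_t = B_t * D. Root translation is usually copied with a uniform scale factor, such as the ratio of the two skeletons' hip heights, so that stride length matches the new leg length.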

    Read the article

  • Methodology behind fetching large XML data sets in pieces

    - by Jerry Dodge
    I am working on an HTTP server in Delphi which simply sends back a custom XML dataset. I am not following any type of standard formatting, such as SOAP. I have the system working seamlessly, except for one small flaw: when I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP server I'm building is essentially an XML-data-based API around a database, implementing the common business rules - therefore, the requests are specific to the data behind the system. When, for example, I fetch a large set of product data, I would like to break this down and send it back piece by piece. However, a single HTTP request calls for a single response. I can't necessarily keep feeding the client with multiple different XML packets unless the client explicitly requests them. I don't have any session management, but rather an API key. I know if I had sessions, I could keep a dataset alive temporarily for a client, and they could request bits and pieces of it. However, without session management, I would have to execute the SQL query multiple times (for each chunk of data), and in the meantime, if that data changes, the "pages" might get messed up, causing items to show on the wrong pages after navigating to a different page. So how is this commonly handled? What's the methodology behind breaking down a large XML dataset into chunks to save the load?
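    One common stateless answer is keyset (cursor) pagination: each response carries the key of its last row, and the client asks for the rows after that key (e.g. WHERE product_id > :last ORDER BY product_id, limited to N rows). Because position is anchored to a key instead of a row offset, re-running the query per chunk stays consistent when rows are inserted or deleted between requests. A sketch of what the requests could look like - the URL and parameter names here are hypothetical, not part of the API described above:

    # first chunk
    curl "http://myserver/products?apikey=KEY&after=0&limit=500"
    # the response reports the last product_id it contained, say 8127;
    # the client passes it back to fetch the next chunk
    curl "http://myserver/products?apikey=KEY&after=8127&limit=500"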

    Read the article

  • Getting NLog Running in Partial Trust

    - by grant.barrington
    To get things working you will need to:
    - Strong-name sign the assembly
    - Allow partially trusted callers

    In the AssemblyInfo.cs file you will need to add the assembly attribute for “AllowPartiallyTrustedCallers”. You should now be able to get NLog working as part of a partial trust installation, except that the File target won’t work. Other targets will still work (database, for example).

    Changing BaseFileAppender.cs to get file logging to work

    In the directory \Internal\FileAppenders there is a file called “BaseFileAppender.cs”. Make a change to the function “TryCreateFileStream()” - this is where the error occurs. Change the function to be:

    private FileStream TryCreateFileStream(bool allowConcurrentWrite)
    {
        FileShare fileShare = FileShare.Read;
        if (allowConcurrentWrite)
            fileShare = FileShare.ReadWrite;
    #if DOTNET_2_0
        if (_createParameters.EnableFileDelete && PlatformDetector.GetCurrentRuntimeOS() != RuntimeOS.Windows)
        {
            fileShare |= FileShare.Delete;
        }
    #endif
    #if !NETCF
        try
        {
            if (PlatformDetector.IsCurrentOSCompatibleWith(RuntimeOS.WindowsNT) ||
                PlatformDetector.IsCurrentOSCompatibleWith(RuntimeOS.Windows))
            {
                return WindowsCreateFile(FileName, allowConcurrentWrite);
            }
        }
        catch (System.Security.SecurityException secExc)
        {
            InternalLogger.Error("Security Exception Caught in WindowsCreateFile. {0}", secExc.Message);
        }
    #endif
        return new FileStream(FileName, FileMode.Append, FileAccess.Write, fileShare, _createParameters.BufferSize);
    }

    Basically we wrap the call in a try..catch. If we catch a SecurityException when trying to create the FileStream using WindowsCreateFile(), we just swallow the exception and use the native System.IO.FileStream instead.

    Read the article

  • /usr/lib/cups/backend/hp has failed

    - by edtechdev
    Ever since 10.04, I can't print to an HP LaserJet P3005. I'm even using an entirely different computer now with a fresh install of 10.10. I've tried with and without the latest HPLIP. Recently, sometimes I can get it to print a few things, but eventually it always fails (usually when printing a PDF from the document viewer; it also doesn't work with Adobe PDF Reader). Sometimes it fails so badly the printer gives an error saying it needs to be turned off and on again. I can't seem to find a fix anywhere; I've googled all over the past year and tried every fix I could find. It does say that /usr/lib/cups/backend/hp has failed. It also doesn't make a difference whether I create the printer using hp-setup or Ubuntu's own printing control panel. I delete and re-create the printer: no difference eventually. I use the default printer settings or custom settings: no difference. I can print perfectly fine to a networked printer at home - an HP OfficeJet 6310. It seems to be networked HP printers at work that I can't print to anymore (except occasionally right after re-installing the printer driver). What's the recommended way to install HP printer drivers and reset or clean out everything from before? And where are the right logs to read or debug commands to run to find out what may be the real cause of the printing problems?
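    On the last question, a sketch of the standard CUPS/HPLIP debugging steps (generic tools, nothing specific to this printer assumed):

    hp-check                          # HPLIP's own diagnostic: driver, PPD and connectivity checks
    sudo cupsctl --debug-logging      # turn on CUPS debug logging
    tail -f /var/log/cups/error_log   # then reproduce the failed job and watch the log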

    Read the article

  • What's consuming HDD space?

    - by Umair Mustafa
    I have a single partition of 92GB in which I installed Ubuntu 12.04. And for some unknown reason a message pops up saying that I only have 1GB of HDD space left. I ran the command sudo du -hscx * on / and /home. /home gave me this result:

    4.0K  C:\nppdf32Log\debuglog.txt
    0     convertedvideo.avi
    176M  Desktop
    16K   Documents
    169M  Downloads
    4.0K  examples.desktop
    17M   file.txt
    4.0K  Music
    984K  Pictures
    4.0K  Public
    320K  Red Hat 6.iso
    2.5M  syslog-ng_3.3.6.tar.gz
    4.0K  Templates
    8.0K  terminal.png
    1.2M  Thunderbird Attachments
    698M  ubuntu10.04LTS.iso
    16K   Ubuntu One
    4.0K  Untitled Folder
    4.0K  Videos
    21G   VirtualBox VMs
    22G   total

    And / gave me this result:

    81G   home
    0     initrd.img
    0     initrd.img.old
    833M  lib
    16K   lost+found
    68K   media
    4.0K  mnt
    260M  opt
    du: cannot access `proc/8339/task/8339/fd/4': No such file or directory
    du: cannot access `proc/8339/task/8339/fdinfo/4': No such file or directory
    du: cannot access `proc/8339/fd/4': No such file or directory
    du: cannot access `proc/8339/fdinfo/4': No such file or directory
    0     proc
    640K  root
    908K  run
    8.6M  sbin
    4.0K  selinux
    4.0K  srv
    0     sys
    148K  tmp
    3.3G  usr
    436M  var
    0     vmlinuz
    0     vmlinuz.old
    86G   total

    If you look at the result returned by /, it shows that /home is consuming 81GB, but on the other hand /home returns only 22GB. I can't figure out what's consuming the HDD. I have not installed anything except virtual machines. [Edit] Perpetrator found using Disk Usage Analyzer.
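    The 81G-vs-22G gap has a likely mundane cause: the shell glob * skips hidden entries, so du -hscx * never counts dot-directories, and things like ~/.local/share/Trash or hidden application caches can be huge. A sketch of a sweep that includes them (replace <user> with the account name):

    # sum everything directly under the home directory, hidden entries included
    sudo du -xh --max-depth=1 /home/<user> | sort -h | tail -20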

    Read the article

  • hp pavilion g6 1250 with a BCM 4313 doesn't see any wireless networks

    - by Ahmed Kotb
    I have tried using Ubuntu 10.04 and Ubuntu 11.10 and both have the same problem: the driver is detected by the Additional (proprietary) Drivers wizard, and after installation Ubuntu can't see any wireless network except one, which is not mine (and I can't connect to it, as it is secured). There are plenty of wireless networks around me but Ubuntu can't detect them, and if I try to connect to one of them as if it were a hidden network, the connection times out. The command lspci -nvn | grep -i net gives:

    04:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01)
    05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)

    iwconfig gives:

    lo      no wireless extensions.
    eth0    no wireless extensions.
    wlan0   IEEE 802.11bgn ESSID:off/any
            Mode:Managed Access Point: Not-Associated Tx-Power=19 dBm
            Retry long limit:7 RTS thr:off Fragment thr:off
            Power Management:off

    I guess it is something related to the Broadcom driver... but I don't know. Any help will be appreciated.

    UPDATE: OK, I installed a new copy of 11.10 to remove the effect of any trials I have made. I followed the link (http://askubuntu.com/q/67806) as suggested. All I have done now is try the command lsmod | grep brc, and it gave me the following:

    brcmsmac 631693 0
    brcmutil 17837 1 brcmsmac
    mac80211 310872 1 brcmsmac
    cfg80211 199587 2 brcmsmac,mac80211
    crc_ccitt 12667 1 brcmsmac

    Then I blacklisted all the other drivers as mentioned in the link. The wireless is still disabled. In the last installation, installing the Broadcom STA driver from Additional Drivers enabled the menu, but as I have said before, it wasn't able to connect or even get a list of available networks. So what should I do now? The output of the command rfkill list all:

    0: phy0: Wireless LAN
       Soft blocked: no
       Hard blocked: no
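    On the BCM4313 two drivers compete: the open brcmsmac (the one loaded in the lsmod output above) and Broadcom's proprietary wl, which the Additional Drivers tool installs from the bcmwl-kernel-source package. Mixing the two - one loaded while the other's blacklist files linger - commonly produces exactly this "sees one network, times out connecting" behaviour. A sketch of settling cleanly on the open driver (package names are the standard Ubuntu ones):

    lsmod | egrep 'wl|brcm'                           # check which driver owns the card
    sudo apt-get remove --purge bcmwl-kernel-source   # drop the proprietary driver and its blacklists
    sudo apt-get install --reinstall linux-firmware   # supplies brcmsmac's firmware
    sudo modprobe -r brcmsmac && sudo modprobe brcmsmac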

    Read the article

  • apt sources.list disabled on upgrade to 12.04

    - by user101089
    After a do-release-upgrade, I'm now running Ubuntu 12.04 LTS, as indicated below:

    > lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 12.04 LTS
    Release:        12.04
    Codename:       precise

    However, I find that all the entries in my /etc/apt/sources.list were commented out except for one. QUESTION: Is it safe for me to edit these, replacing the old 'lucid' with 'precise' in what is shown below?

    ## unixteam source list
    # deb http://debian.yorku.ca/ubuntu/ precise main main/debian-installer restricted restricted/debian-installer # disabled on upgrade to precise
    # deb-src http://debian.yorku.ca/ubuntu/ precise main restricted # disabled on upgrade to precise
    # deb http://debian.yorku.ca/ubuntu/ lucid-updates main restricted # disabled on upgrade to precise
    # deb-src http://debian.yorku.ca/ubuntu/ lucid-updates main restricted # disabled on upgrade to precise
    # deb http://debian.yorku.ca/ubuntu/ precise universe # disabled on upgrade to precise
    # deb-src http://debian.yorku.ca/ubuntu/ precise universe # disabled on upgrade to precise
    # deb http://debian.yorku.ca/ubuntu/ precise multiverse # disabled on upgrade to precise
    # deb-src http://debian.yorku.ca/ubuntu/ precise multiverse # disabled on upgrade to precise
    # deb http://debian.yorku.ca/ubuntu lucid-security main restricted # disabled on upgrade to precise
    # deb-src http://debian.yorku.ca/ubuntu lucid-security main restricted # disabled on upgrade to precise
    # deb http://debian.yorku.ca/ubuntu lucid-security universe # disabled on upgrade to precise
    # deb-src http://debian.yorku.ca/ubuntu lucid-security universe # disabled on upgrade to precise
    # deb http://debian.yorku.ca/ubuntu lucid-security multiverse # disabled on upgrade to precise
    # deb-src http://debian.yorku.ca/ubuntu lucid-security multiverse # disabled on upgrade to precise
    # R sources
    # see http://cran.us.r-project.org/bin/linux/ubuntu/ for details
    # deb http://probability.ca/cran/bin/linux/ubuntu lucid/ # disabled on upgrade to precise
    deb http://archive.ubuntu.com/ubuntu precise main multiverse universe
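    Replacing the release codename is the standard fix here; the upgrader disables mirror entries it doesn't recognize rather than rewriting them. A sketch (back up first; the mirror hostname stays as-is, and you still need to remove the leading # from the entries you want active):

    sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup
    sudo sed -i 's/lucid/precise/g' /etc/apt/sources.list
    sudo apt-get update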

    Read the article

  • Booting Ubuntu failure: error: attempt to read or write outside of disk 'hd0'

    - by never4getthis
    I have installed Ubuntu 12.10 on a WD external hard drive (320GB). This is a complete installation, not a live USB. When I plug it into my HP desktop, I go to the BIOS settings and boot off the hard drive, and everything works perfectly - as it should. This works on every single computer and laptop in my house (all HP) - except for ONE: my HP ProBook 4530s. When I select to boot off the USB I get the message:

    error: attempt to read or write outside of disk 'hd0'

    Now, I have removed the HDD from my laptop, and the external drive is the ONLY drive plugged in. Below is a picture of the screen. After the message I navigate to ls / (as shown below). From here I try to access other folders under ls /; for example, I try to go to ls /boot to get to the grub folder. Then I get the same message as before, as shown by the image below. The only folders I can access without getting the message again are /home, /run and /usr.

    So how do I:
    1. Boot Ubuntu from GRUB2 (this screen) manually
    2. Set it to automatically boot Ubuntu
    3. If possible, get an explanation for this problem

    Thanks!
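    A hedged sketch of what can be tried from the GRUB prompt shown above - the partition and device names are guesses and must be adapted to whatever ls actually prints (on Ubuntu, /vmlinuz and /initrd.img are symlinks in the root of the filesystem):

    grub> ls                        # lists e.g. (hd0) (hd0,msdos1)
    grub> set root=(hd0,msdos1)
    grub> linux /vmlinuz root=/dev/sdb1 ro
    grub> initrd /initrd.img
    grub> boot

    If even reading /boot fails with the same 'outside of disk' error, the laptop's firmware is exposing only part of the USB disk to GRUB; the usual workaround is a small separate /boot partition at the very start of the drive.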

    Read the article

  • How do I install the newest Flash beta for Minefield on 64-bit Ubuntu?

    - by Øsse
    Hi, I'm using a fully updated Ubuntu 10.10 64-bit which is pretty much bog standard, except I'm using Minefield from the Mozilla daily PPA in addition to Firefox as provided by Ubuntu. I want to try the newest beta of Flash (10.3 as of writing). The installation instructions simply say "drop libflashplayer.so into the plugin folder of your browser". This is the 32-bit version. Currently I'm using Flash as provided by the package flashplugin-installer (ver. 10.2.152.27ubuntu0.10.10.1). Going to about:plugins in Minefield/Firefox says the version of Flash I'm running is 10.2 r152 and the file responsible is npwrapper.libflashplayer.so. I have two files with that name on my system. One is /usr/share/ubufox/plugins/npwrapper.libflashplayer.so, which is a broken link to /usr/lib/flashplugin-installer/npwrapper.libflashplayer.so. The other is /var/lib/flashplugin-installer/npwrapper.libflashplayer.so (note var instead of usr). I also have a file simply called libflashplayer.so in /usr/lib/flashplugin-installer/. So it seems Firefox/Minefield gets its Flash plugin from a file that doesn't exist, and replacing libflashplayer.so with the one in the archive from Adobe has no effect. Since I want to try the 32-bit version, I have to use the wrapper. The only way I know how is through the flashplugin-installer package. How would I go about installing the newest 32-bit beta Flash, if possible at all? And where is "the plugin folder of my browser"?
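    Those npwrapper.libflashplayer.so files come from nspluginwrapper, the shim that lets a 64-bit browser load a 32-bit plugin - so "drop it into the plugin folder" translates to wrapping the new .so rather than copying it. A sketch, assuming the beta's libflashplayer.so was unpacked to ~/Downloads (the path is an example):

    sudo apt-get install nspluginwrapper     # if it isn't already installed
    nspluginwrapper -i ~/Downloads/libflashplayer.so
    # the wrapped plugin is placed under ~/.mozilla/plugins;
    # restart Minefield and re-check about:plugins to confirm the new version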

    Read the article

  • My View on ASP.NET Web Forms versus MVC

    - by Ricardo Peres
    Introduction

    A lot has been said on Web Forms and MVC, but since I was recently asked about my opinion on the subject, here it is. First, I have to say that I really like both technologies and I don't think either is going away - just remember SharePoint, which is built on top of Web Forms. I see them as complementary, targeting different needs and leveraging different skills. Let's go through some of their differences.

    Rapid Application Development

    Rapid Application Development (RAD) is the development process by which you have an Integrated Development Environment (IDE), a visual design surface and a toolbox, and you drag components from the toolbox to the design surface and set their properties through a property inspector. It was introduced with some of the earliest Windows graphical IDEs, such as Visual Basic and Delphi.

    With Web Forms you have RAD out of the box. Visual Studio offers a generally good (and extensible) designer for the layout of pages and web user controls. Designing a page may simply be about dragging controls from the toolbox, setting their properties and wiring up some events to event handlers, which are implemented in code-behind .NET classes. Most people will be familiar with this kind of development and enjoy it. You can see what you are doing from the beginning.

    MVC also has designable pages - called views in MVC terminology - the problem is that they can be built using different technologies, some of which, at the moment (MVC 4), do not support RAD - Razor, for example. I believe it is just a matter of time for that to be implemented in Visual Studio (it will mostly consist of HTML editing), and until that day comes, you have to live with source editing.

    Development Model

    Web Forms features the same development model that you are used to from Windows Forms and other similar technologies: events fired by controls and automatic persistence of their properties between postbacks. For that, it uses concepts such as view state, which some may love and others may hate, because it may be misused quite easily, but otherwise does its job well. Another fundamental concept is data binding, by which a collection of data can be fed to a control and have it render that data somehow - just think of the GridView control. The focus is on the page: that's where it all starts, and you can place everything in the same code-behind class: data access, business logic, layout, etc. The controls take care of generating a great part of the HTML and JavaScript for you.

    With MVC there is no free lunch when it comes to data persistence between requests; you have to implement it yourself. As for event handling, that is at the core of MVC, in the form of controllers and action methods - you just don't think of them as event handlers. In MVC you need to think more in HTTP terms, so action methods such as POST and GET are relevant to you, and you may write actions to handle one or the other. Also of crucial importance is model binding: the way by which MVC converts your posted data into a .NET class. This is something that ASP.NET 4.5 Web Forms has introduced as well, but it is a cornerstone in MVC. MVC also has built-in validation of these .NET classes, which out of the box uses the Data Annotations API. You have full control of the generated HTML - except for that coming from the helper methods, usually small fragments - which requires a greater familiarity with the specifications.
    You normally rely much more on JavaScript APIs - they are even included in the Visual Studio template - because much less is done for you.

    Reuse

    It is difficult to accept a professional company/project that does not employ reuse. It can save a lot of time, thus cutting costs significantly. Code reused in several projects matures as time goes by and helps developers learn from past experiences.

    ASP.NET Web Forms was built with reuse in mind, in the form of controls. Controls encapsulate functionality and are generally portable from project to project (with the notable exception of web user controls, those with an associated .ASCX markup file). ASP.NET has dozens of controls and it is very easy to develop new ones, so I believe this is a great advantage. A control can inject JavaScript code and external references as well as generate HTML and CSS.

    MVC, on the other hand, does not use controls - it is possible to use them, with some view engines like ASPX, but it is just not advisable because it breaks the flow - where do Init, Load, PreRender, etc., fit? The closest thing to controls is extension methods, or helpers. They serve the same purpose - generating HTML, CSS or JavaScript - and can be reused between different projects. What differentiates them from controls is that there is no inheritance and no context - an extension method is just a static method which doesn't know where it is being called. You also have partial views, which you can reuse in the same project, but there is no inheritance there either. This, in my view, is a weakness of MVC.

    Architecture

    Both technologies are highly extensible. I have started writing a series of posts on ASP.NET Web Forms extensibility and will probably write another series on MVC extensibility as well. A number of scenarios are covered in either of these models, and some extensibility points apply to both, because, of course, both stand upon ASP.NET.

    With Web Forms, if you're like me, you start by defining your master pages, pages and controls, with some helper classes to glue everything together. You may as well throw in some JavaScript, but probably your main work will be with plain old .NET code. The controls you define have the chance to inject JavaScript code and references, through either the ScriptManager or the page's ClientScript object, as well as generating HTML and CSS code. The master page and page model with code-behind classes offer a number of "hooks" by which you can change the normal way of things; for example, in a page you can access any control on the master page, add script or stylesheet references to its head and even change the page's title. Also, with Web Forms, you typically have URLs in the form "/SomePath/SomePage.aspx?SomeParameter=SomeValue", which isn't really SEO-friendly, not to mention the HTML that some controls produce - far from standards, optimization and best practices.

    In MVC, you also normally start by defining the master page (or layout) and views, which are the visible parts, and then define controllers in separate files. These controllers do not know anything about the views, except the names and types of the parameters that will be passed to and from them. The controller will be responsible for the data access and business logic, eventually relying on additional classes for this purpose. In a controller you only receive parameters and return a result, which may be a request for the rendering of a view, a redirection to another URL or a JSON object, to name just a few.
    The controller class does not know anything about the web, so you can effectively reuse it in a non-web project. This separation, and the lack of programmatic access to the UI elements, makes it very difficult to implement, for example, something like SharePoint with MVC. OK, I know about Orchard, but it isn't really a general-purpose development framework, but instead a CMS that happens to use MVC. Not having controls render HTML for you gives you in turn much more control over it - it is your responsibility to create it, which you can either consider a blessing or a curse; in the latter case, you probably shouldn't be using MVC at all. Also, MVC URLs tend to be much more SEO-oriented, if you design your controllers and actions properly.

    Testing

    In a well-defined architecture, you should separate business logic, data access logic and presentation logic, because these are all different things and there might even be the need to switch one implementation for another: for example, you might design a system which includes a data access layer, a business logic layer and two presentation layers, one on top of ASP.NET and the other with WPF; and the data access layer might be implemented first using NHibernate and later on switched for Entity Framework Code First. These changes are not that rare, so care should be taken in designing the system to make them possible.

    Web Forms is difficult to test, because it relies on event handlers which are only fired in web contexts, when a form is submitted or a page is requested. You can call them with reflection, but you have to set up a number of mocking objects first, HttpContext.Current being the first that comes to mind. MVC, on the other hand, makes testing controllers a breeze, so much so that it even includes a template option for generating boilerplate unit test classes from the start. A well-designed controller - from the unit test point of view - will receive everything it needs to work as parameters to its action methods, so you can pass whatever values you need very easily. That doesn't mean, of course, that everything can be tested: views, for instance, are difficult to test without actually accessing the site, but MVC offers the possibility to compile views at build time, so that, at least, you know you don't have syntax errors beforehand.
    Myths

    Some popular but unfounded myths around MVC include:

    - You cannot use controls in MVC: not true; actually, you can, at least with the Web Forms (ASPX) view engine - the declaration and usage is exactly the same as with Web Forms.
    - You cannot specify a base class for a view: with the ASPX view engine you can use the Inherits Page directive; with this and all the others you can use the pageBaseType and userControlBaseType attributes of the <pages> element.
    - MVC shields you from doing "bad things" on your views: well, you can place any code in a code block, at least with the ASPX view engine (you may be starting to see a pattern here), even data access code.
    - The model is the entity model, tied to an O/RM: the model is actually any class that you use to pass values to a view, including (but generally not recommended) an entity model.
    - Unit tests come with no cost: unit tests generally don't cover the UI, although there are frameworks just for that (see WatiN, for example); also, for some tests, you will have to mock or replace either the HttpContext.Current property or the HttpContextBase class yourself.
    - Everything is testable: views aren't, without accessing the site.
    - MVC relies on HTML5/some_cool_new_javascript_framework: there is no relation whatsoever; MVC renders whatever you want it to render and does not require any framework to be present. The thing is, the subsequent releases of MVC happened at a time when Microsoft had become much more involved in standards, so the files and technologies included in the Visual Studio templates reflect this, and it just happens to work well with jQuery, for example.

    Conclusion

    Well, this is how I see it. Some folks may think that I am being too hard on MVC, perhaaps assuming that I don't like it, but that's not true: like I said, I do like MVC and I am starting my new projects with it. I just don't want to go along with those who say that MVC is much superior to Web Forms; in fact, some things you can do much more easily with Web Forms than with MVC. I will be more than happy to hear what you think on this!

    Read the article

  • Silverlight Cream for November 24, 2011 -- #1173

    - by Dave Campbell
    In this Thanksgiving Day Issue: Andrea Boschin, Samidip Basu, Ollie Riches, WindowsPhoneGeek, Sumit Dutta, Dhananjay Kumar, Daniel Egan, Doug Mair, Chris Woodruff, and Debal Saha. Happy Thanksgiving Everybody!

    Above the Fold:
    Silverlight: "Silverlight CommandBinding with Simple MVVM Toolkit" - Debal Saha
    WP7: "How many pins can Bing Maps handle in a WP7 app - part 3" - Ollie Riches

    Shoutouts:
    Michael Palermo's latest Desert Mountain Developers is up
    Michael Washington's latest Visual Studio #LightSwitch Daily is up

    From SilverlightCream.com:
    Windows Phone 7.5 - Play with music: Andrea Boschin's latest WP7 post is up on SilverlightShow... he's talking about the improvements in the music hub and also the programmability of music.
    OData caching in Windows Phone: Samidip Basu has an OData post up on SilverlightShow also, and he's talking about data caching strategies on WP7.
    How many pins can Bing Maps handle in a WP7 app - part 3: Ollie Riches has part 3 of his series on Bing Maps and pins... specifically how to deal with a large number of them... after going through discussing pins, he is suggesting using a heat map, which looks pretty darn good and renders fast... except when on a device :(
    Improvements in the LongListSelector Selection with Nov '11 release of WP Toolkit: WindowsPhoneGeek's latest is this tutorial on the LongListSelector in the WP Toolkit... check out the previous info in his free eBook to get ready, then dig into this tutorial for improvements in the control.
    Part 25 - Windows Phone 7 - Device Status: Sumit Dutta's latest post is number 25 in his WP7 series, and this time out he's digging into device status in the Microsoft.Phone.Info namespace.
    Video on How to work with Picture in Windows Phone 7: Dhananjay Kumar's latest video tutorial on WP7 is up, and he's talking about working with photos.
    Live Tiles - Windows Phone Workshop: Daniel Egan has the video up of a Windows Phone Workshop done earlier this week on Live Tiles.
    31 Days of Mango | Day #15: The Progress Bar: Doug Mair shares the show with Jeff Blankenburg in Jeff's Day 15 of his 31-day quest of Mango, talking about the ProgressBar: Indeterminate and Determinate modes abound.
    31 Days of Mango | Day #14: Using OData: Chris Woodruff has a guest spot on Jeff Blankenburg's 31 Days series with this post on OData... a long, detailed tutorial with all the code.
    Silverlight CommandBinding with Simple MVVM Toolkit: Debal Saha has a nice detailed tutorial up on CommandBinding... he's using the SimpleMVVM Toolkit and shows downloading and installing it.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • What is a “pretty and proper OO” way for handling sessions and authentication?

    - by asdfqwer
    Is coupling these two concepts a bad approach? As of right now I'm delegating all session handling, and whether or not a user desires to log out, to my config.inc file. As I was writing my Auth class I started wondering whether or not my Auth class should be taking care of most of the logic in my config.inc. Regardless, I'm sure there's a more elegant way of handling this... Here is what I have in my config.inc (also, a large chunk of this code is based on a reply I found on SO, except I can't find the source ._.):

    ini_set('session.name', 'SID');

    # session management
    session_set_cookie_params(24*60*60); // set SID cookie lifetime
    session_start();

    if (isset($_SESSION['LOGOUT'])) {
        session_destroy();                       // destroy session data
        $_SESSION = array();                     // destroy session data sanity check
        setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
        #header('Location: '.DOCROOT);
    } elseif (isset($_SESSION['SID_AUTH'])) {    // verify user has authenticated
        if (!isset($_SESSION['SID_CREATED'])) {
            $_SESSION['SID_CREATED'] = time();
        } elseif (time() - $_SESSION['SID_CREATED'] > 6*60*60) {
            // session started more than 6 hours ago
            session_regenerate_id();             // reset SID value
            $_SESSION['SID_CREATED'] = time();   // update creation time
        }
        if (isset($_SESSION['SID_MODIFIED']) && (time() - $_SESSION['SID_MODIFIED'] > 12*60*60)) {
            // last request was more than 12 hours ago
            session_destroy();                       // destroy session data
            $_SESSION = array();                     // destroy session data sanity check
            setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
        }
        $_SESSION['SID_MODIFIED'] = time();      // update last activity time stamp
    }

    Read the article
