Search Results

Search found 15228 results on 610 pages for 'comment action'.


  • Auto-run script when iPad plugged in

    - by oldmankit
    The way that Ubuntu handles documents on the iPad is awesome (without any configuration required). It beats Windows, even after you install iTunes. I want to have the documents in certain iPad apps automatically synced into my Dropbox directory whenever the iPad is connected by USB. The syncing is easy; getting the script to run is not. I have already read the information in various (very out-of-date) tutorials. The best I could find was here: http://askubuntu.com/a/25091/16157

    I used lsusb, with the following results:

        Bus 002 Device 012: ID 05ac:12a2 Apple, Inc.

    (Please note that when an iPad is connected, Ubuntu seems to mount it to two different mount points: one for "Documents" and one for the whole iPad filesystem. They are both mounted in ~/.gvfs.)

    I have created the following file, /etc/udev/rules.d/96-ipad_sync.rules:

        ACTION=="add", ATTRS{idVendor}=="05ac", ATTRS{idProduct}=="12a2", RUN+="/home/kit/bin/jobdone2"

    I want it to run a test script, which sleeps for five seconds and then plays an mp3 file. (The test script works, and I have typed the location correctly.) So far, when I plug the iPad in, nothing happens. Yes, I waited five seconds. This is the output I get from typing udevadm monitor --env:

        KERNEL[29348.114010] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4 (usb)
        KERNEL[29348.114844] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0 (usb)
        KERNEL[29348.129118] remove /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0 (usb)
        KERNEL[29348.130699] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.0 (usb)
        KERNEL[29348.130845] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.1 (usb)
        KERNEL[29348.130909] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.2 (usb)
        UDEV [29348.163861] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4 (usb)
        UDEV [29348.170390] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0 (usb)
        UDEV [29348.171521] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.1 (usb)
        UDEV [29348.172230] remove /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0 (usb)
        UDEV [29348.172890] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.2 (usb)
        UDEV [29348.175645] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.0 (usb)
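
    A likely culprit (an assumption on my part, not something confirmed in the post): udev kills long-running children spawned via RUN+= shortly after the event is handled, so a script that sleeps for five seconds and then plays audio can be terminated before it ever makes a sound. A hedged workaround is to make the RUN+= target a tiny wrapper that detaches the real job from udev. A minimal Python sketch, assuming a hypothetical long-running sync script at /home/kit/bin/sync-ipad:

        #!/usr/bin/env python3
        # Minimal sketch: detach the real job from udev so it survives
        # udev's cleanup of RUN+= children. All paths are assumptions.
        import os
        import subprocess
        import sys

        REAL_JOB = "/home/kit/bin/sync-ipad"  # hypothetical sync script

        if os.fork() > 0:
            sys.exit(0)          # first fork: return control to udev at once
        os.setsid()              # leave udev's session/process group
        if os.fork() > 0:
            sys.exit(0)          # second fork: fully daemonize
        devnull = os.open(os.devnull, os.O_RDWR)
        for fd in (0, 1, 2):
            os.dup2(devnull, fd)  # detach stdin/stdout/stderr
        subprocess.run([REAL_JOB])

    Even with the job detached, playing an mp3 from udev's context may still fail for unrelated reasons (no access to the user's audio session), so appending a line to a log file is a safer first smoke test than sound.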

    Read the article

  • Who should control navigation in an MVVM application?

    - by SonOfPirate
    Example #1: I have a view displayed in my MVVM application (let's use Silverlight for the purposes of the discussion) and I click on a button that should take me to a new page.

    Example #2: That same view has another button that, when clicked, should open up a details view in a child window (dialog).

    We know that there will be Command objects exposed by our ViewModel, bound to the buttons, with methods that respond to the user's click. But what then? How do we complete the action? Even if we use a so-called NavigationService, what are we telling it?

    To be more specific: in a traditional View-first model (like URL-based navigation schemes, such as on the web or in the Silverlight built-in navigation framework), the Command objects would have to know what View to display next. That seems to cross the line when it comes to the separation of concerns promoted by the pattern. On the other hand, if the button wasn't wired to a Command object and behaved like a hyperlink, the navigation rules could be defined in the markup. But do we want the Views to control application flow, and isn't navigation just another type of business logic? (I can say yes in some cases and no in others.)

    To me, the utopian implementation of the MVVM pattern (and I've heard others profess this) would be to have the ViewModel wired in such a way that the application can run headless (i.e. no Views). This provides the most surface area for code-based testing and makes the Views a true skin on the application. And my ViewModel shouldn't care if it is displayed in the main window, a floating panel or a child window, should it? According to this approach, it is up to some other mechanism at runtime to 'bind' which View should be displayed for each ViewModel. But what if we want to share a View with multiple ViewModels, or vice versa?

    So, given the need to manage the View-ViewModel relationship so we know what to display when, along with the need to navigate between views, including displaying child windows / dialogs, how do we truly accomplish this in the MVVM pattern?
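
    To make the "headless" ideal concrete, here is a minimal sketch of ViewModel-first navigation (the pattern in general terms, not Silverlight-specific; every class and method name below is invented for illustration). The ViewModel asks to navigate to another ViewModel, and a registry owned by the shell decides which View, if any, gets shown and where:

        # Sketch: ViewModels never name Views; a host-owned registry does.
        class NavigationService:
            def __init__(self):
                self._view_for = {}   # ViewModel type -> View factory
                self._show = None     # host callback that displays a View

            def register_view(self, vm_type, view_factory):
                self._view_for[vm_type] = view_factory

            def attach_host(self, show_callback):
                # The shell decides *where* things appear: page, panel, dialog.
                self._show = show_callback

            def navigate_to(self, view_model):
                factory = self._view_for.get(type(view_model))
                if self._show and factory:
                    view = factory()
                    view.bind(view_model)  # assumed View API
                    self._show(view)
                # With no host attached, navigation is a no-op, so the whole
                # application can run headless for testing.

    A command handler then calls navigate_to(DetailsViewModel(...)), and whether that surfaces as a page, a floating panel or a child window is the host's decision rather than the ViewModel's; sharing a View between ViewModels becomes a matter of registering the same factory for more than one ViewModel type.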

    Read the article

  • Failed to start up after upgrading software

    - by Landy
    I asked this question on Super User an hour ago; then I learned about this community, so I moved the question here...

    I've been running Ubuntu 10.10 on a physical x86-64 machine. Today Update Manager reminded me that there are some updates to install, and I confirmed the action. I should have read the update list, but I didn't. I can only remember that there was an update for cups. After the upgrade, Update Manager required a restart, which I also confirmed. But after the restart, the computer can't start up. There are errors in the console:

        Begin: Running /scripts/init-premount ... done.
        Begin: Mounting root file system ...
        Begin: Running /scripts/local-top ... done.
        [xxx] usb 1-8: new high speed USB device using ehci_hcd and address 3
        [xxx] usb 2-1: new full speed USB device using ohci_hcd and address 2
        [xxx] hub 2-1:1.0: USB hub found
        [xxx] hub 2-1:1.0: 4 ports detected
        [xxx] usb 2-1.1: new low speed USB device using ohci_hcd and address 3
        Gave up waiting for root device. Common problems:
         - Boot args (cat /proc/cmdline)
           - Check rootdelay= (did the system wait long enough?)
           - Check root= (did the system wait for the right device?)
         - Missing modules (cat /proc/modules; ls /dev)
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        FATAL: Could not load /lib/modules/2.6.35-22-generic/modules.dep: No such file or directory
        ALERT! /dev/sda1 does not exist. Dropping to a shell!
        BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        (initramfs) [cursor is here]

    At the moment, I can't input anything in the console; the keyboard doesn't work at all. What's wrong? How can I check the boot args or "root=" as suggested? How can I fix this issue? Thanks.

    PS1: /dev/sda1 is type ext4 (rw,nosuid,nodev).
    PS2: /dev/sda1 can be mounted and accessed successfully under SUSE 11 SP1 x64.
    PS3: From this link, I think the keyboard doesn't work because the USB driver is not loaded at that time.

    Read the article

  • Velocity collision detection (2D)

    - by ultifinitus
    Alright, so I have made a simple game engine (see youtube), and my current implementation of collision resolution has a slight problem, involving the velocity of a platform. Basically I run through all of the objects necessary to detect collisions on and resolve those collisions as I find them. Part of that resolution is setting the player's velocity = the platform's velocity. Which works great! Unless I have a row of platforms moving at different velocities, or a platform between a stack of tiles...

    (current system)

        bool player::handle_collisions()
        {
            collisions tcol;
            bool did_handle = false;
            bool thisObjectHandle = false;

            for (int temp = 0; temp < collideQueue.size(); temp++) {
                thisObjectHandle = false;
                tcol = get_collision(prevPos.x, y, get_img()->get_width(), get_img()->get_height(),
                                     collideQueue[temp]->get_position().x, collideQueue[temp]->get_position().y,
                                     collideQueue[temp]->get_img()->get_width(), collideQueue[temp]->get_img()->get_height());

                if (prevPos.y >= collideQueue[temp]->get_prev_pos().y + collideQueue[temp]->get_img()->get_height())
                    if (tcol.top > 0) {
                        add_pos(0, tcol.top);
                        set_vel(get_vel().x, collideQueue[temp]->get_vel().y);
                        thisObjectHandle = did_handle = true;
                    }

                if (prevPos.y + get_img()->get_height() <= collideQueue[temp]->get_prev_pos().y)
                    if (tcol.bottom > 0) {
                        add_pos(collideQueue[temp]->get_vel().x, -tcol.bottom);
                        set_vel(get_vel().x /*collideQueue[temp]->get_vel().x*/, collideQueue[temp]->get_vel().y);
                        ableToJump = true;
                        jumpTimes = maxjumpable;
                        thisObjectHandle = did_handle = true;
                    }

                ///
                /// ADD CODE FROM NEXT CODE BLOCK HERE (on forum, not in code)
                ///
            }

            for (int temp = 0; temp < collideQueue.size(); temp++) {
                thisObjectHandle = false;
                tcol = get_collision(x, y, get_img()->get_width(), get_img()->get_height(),
                                     collideQueue[temp]->get_position().x, collideQueue[temp]->get_position().y,
                                     collideQueue[temp]->get_img()->get_width(), collideQueue[temp]->get_img()->get_height());

                if (prevPos.x + get_img()->get_width() <= collideQueue[temp]->get_prev_pos().x)
                    if (tcol.left > 0) {
                        add_pos(-tcol.left, 0);
                        set_vel(collideQueue[temp]->get_vel().x, get_vel().y);
                        thisObjectHandle = did_handle = true;
                    }

                if (prevPos.x >= collideQueue[temp]->get_prev_pos().x + collideQueue[temp]->get_img()->get_width())
                    if (tcol.right > 0) {
                        add_pos(tcol.right, 0);
                        set_vel(collideQueue[temp]->get_vel().x, get_vel().y);
                        thisObjectHandle = did_handle = true;
                    }
            }
            return did_handle;
        }

    (If I add the following code where the comment says to, which is glitchy, the above problem doesn't happen, though it brings others.)

        if (!thisObjectHandle) {
            if (tcol.bottom > tcol.top) {
                add_pos(collideQueue[temp]->get_vel().x, -tcol.bottom);
                set_vel(get_vel().x, collideQueue[temp]->get_vel().y);
            }
            else if (tcol.top > tcol.bottom) {
                add_pos(0, tcol.top);
                set_vel(get_vel().x, collideQueue[temp]->get_vel().y);
            }
        }

    How would you change my system to prevent this?
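
    One restructuring that is often suggested for exactly this symptom (a sketch of the general idea only, not a drop-in replacement for the engine above; the helper names are invented): resolve each overlap along its axis of least penetration, and record at most one "ground" platform per frame, so the player inherits exactly one platform's velocity instead of whichever object happened to be tested last:

        # Sketch: minimal-penetration resolution plus a single ground reference.
        def aabb_overlap(a, b):
            """Return the minimal push-out (dx, dy) separating two axis-aligned
            boxes (screen coords, y down), or None if they don't overlap.
            Boxes are assumed to expose x, y, w, h."""
            push_right = (b.x + b.w) - a.x
            push_left = b.x - (a.x + a.w)
            push_down = (b.y + b.h) - a.y
            push_up = b.y - (a.y + a.h)
            if push_right <= 0 or push_left >= 0 or push_down <= 0 or push_up >= 0:
                return None
            dx = push_right if push_right < -push_left else push_left
            dy = push_down if push_down < -push_up else push_up
            return dx, dy

        def resolve_collisions(player, platforms, dt):
            player.ground = None
            for p in platforms:
                overlap = aabb_overlap(player, p)
                if overlap is None:
                    continue
                dx, dy = overlap
                if abs(dx) < abs(dy):
                    player.x += dx            # horizontal push-out
                    player.vx = 0
                else:
                    player.y += dy            # vertical push-out
                    player.vy = p.vy
                    if dy < 0:                # pushed up => standing on p
                        player.ground = p
            # Inherit exactly one platform's horizontal motion, after all
            # overlaps are resolved, so adjacent platforms moving at different
            # speeds can no longer fight over the player's velocity.
            if player.ground is not None:
                player.x += player.ground.vx * dt

    The key change is that velocity inheritance happens once, from a single chosen object, rather than inside every overlap branch.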

    Read the article

  • Actor and Sprite, who should own these properties?

    - by Gerardo Marset
    I'm writing sort of a 2D game engine for making the process of creating games easier. It has two classes, Actor and Sprite. Actor is used for interactive elements (the player, enemies, bullets, a menu, an invisible instance that controls score, etc.) and Sprite is used for animated (or not) images with transparency (or not). The actor may have an assigned sprite that represents it on the screen, which may change during the game. E.g. in a top-down action game you may have an actor with a sprite of a little guy that changes when attacking, walking, and facing different directions, etc.

    Currently the actor has x and y properties (its coordinates on the screen), while the sprite has an index property (the number of the frame currently being shown by the sprite). Since the sprite doesn't know which actor it belongs to (or if it belongs to an actor at all), the actor must pass its x and y coordinates when drawing the sprite. Also, since an actor may reset its sprite each frame (and usually does), the sprite's index property must be passed from the old to the new sprite, like so (pseudocode):

        function change_sprite(new_sprite)
            old_index = my.sprite.index
            my.sprite = new_sprite()
            my.sprite.index = old_index % my.sprite.frames
        end

    I always thought this was kind of cumbersome, but it never was a big problem.

    Now I have decided to add support for more properties: namely a property to draw the sprite rotated, a property to draw it flipped, a property to draw it stretched, etc. These should probably belong to the sprite and not the actor, but if they do, the actor would have to pass them from the old to the new sprite each time it changes... On the other hand, if they belonged to the actor, the actor would have to pass each property to the sprite when drawing it (since the sprite doesn't know which actor it belongs to, and it shouldn't, since sprites aren't just meant to be used by actors, really). Another option I thought of would be having an extra class that owns all these properties (plus index, x and y) and links an actor with a sprite, but that doesn't come without drawbacks.

    So, what should I do with all these properties? Thanks!
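
    To make the third option concrete, here is a minimal sketch (one possible design with invented names, not the only way to cut it): a small render-state object owns index, rotation, flip and scale; sprites stay stateless frame sources; and the actor owns position plus one render state, so swapping sprites no longer copies properties around:

        # Sketch: Sprite = stateless frames; RenderState = mutable draw props.
        class Sprite:
            def __init__(self, frames):
                self.frames = frames            # list of images

        class RenderState:
            def __init__(self, sprite):
                self.sprite = sprite
                self.index = 0
                self.rotation = 0.0
                self.flipped = False
                self.scale = 1.0

            def change_sprite(self, new_sprite):
                # index wraps into the new animation; rotation, flip and
                # scale survive automatically because they live here.
                self.index %= len(new_sprite.frames)
                self.sprite = new_sprite

        class Actor:
            def __init__(self, sprite):
                self.x, self.y = 0.0, 0.0
                self.render = RenderState(sprite)

            def draw(self, screen):
                r = self.render
                frame = r.sprite.frames[r.index]
                # blit signature is assumed; substitute your engine's call
                screen.blit(frame, (self.x, self.y), rotation=r.rotation,
                            flip=r.flipped, scale=r.scale)

    A sprite used outside an actor simply gets its own RenderState, which addresses the "sprites aren't just for actors" concern without duplicating the properties in two places.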

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more of it available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices?

    The classic answer: write a web service layer

    Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with SQL Server directly - why should mobile apps be any different?

    The better answer: the iSql SDK

    Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API.

    OSQL in 25 lines of code

    As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video.

    Beta out now - your chance to give us your suggestions!

    We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!
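
    The post doesn't publish the SDK's actual API, so purely as an illustration of the generic-proxy idea described above (every name, route and connection string below is an assumption, and pyodbc with a reachable SQL Server is presumed), a relay that accepts SQL over HTTP and returns rows as JSON might look like:

        # Illustrative sketch only: a generic HTTP -> SQL Server relay.
        # A real service would add authentication, TLS and parameterization.
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        import pyodbc  # assumed driver; any DB-API module would do

        CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=...;UID=...;PWD=..."

        class QueryProxy(BaseHTTPRequestHandler):
            def do_POST(self):
                body = self.rfile.read(int(self.headers["Content-Length"]))
                sql = json.loads(body)["sql"]
                with pyodbc.connect(CONN_STR) as conn:
                    rows = [list(r) for r in conn.execute(sql).fetchall()]
                payload = json.dumps(rows, default=str).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(payload)

        if __name__ == "__main__":
            HTTPServer(("", 8080), QueryProxy).serve_forever()

    The point of the sketch is the shape of the architecture: one generic endpoint, with the per-app logic living in the database and the client, which is the trade-off the post is arguing for.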

    Read the article

  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free PHP libraries that were available let me do it on my server (a web service was not good enough). Basically, either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough!

    I found that LibreOffice (OpenOffice's successor) allows command-line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) onto my VirtualBox (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files using the command line like this:

        libreoffice --headless -convert-to pdf fileToConvert.docx -outdir output/path/for/pdf

    I thought: sweet... but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/, but I was unable to get it to work on my host's web server, because the server didn't have all the dependencies (dependency hell! http://en.wikipedia.org/wiki/Dependency_hell).

    I was at a loss for how to make it work, until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html. I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00, as well as the directions on his site), but in short, it allows one to avoid dependency hell by copying all the files used when you run certain commands, recreating the Linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did on Ubuntu with the command above, with one tweak: I needed to run the LibreOffice wrapper that CDE generated.

    So, below is my PHP code that calls it. In this code snippet, the filename to be copied is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back, and then delete all the files (so that it doesn't start growing exponentially). I did it this way because I wasn't able to make it work otherwise on the web server. If there is a Linux + webserver ninja out there who can figure out how to make it work without doing this, I would be interested to know what you did. Please post a comment or something if you did that.

        <?php
        // first copy the file to the magic place where we can convert it to a pdf on the fly
        copy($time.$_POST["filename"], "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]);
        // change to that directory
        chdir('../LibreOffice/cde-package/cde-root/home/robert');
        // the magic command that does the conversion
        $myCommand = "./libreoffice.cde --headless -convert-to pdf Desktop/".$_POST["filename"]." -outdir Desktop/";
        exec($myCommand);
        // copy the file back
        copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]), "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"]));
        // delete all the files out of the magic place where we can convert it to a pdf on the fly
        $files1 = scandir('Desktop');
        // my files that I generated all happened to start with a number
        $pattern = '/^[0-9]/';
        foreach ($files1 as $value) {
            preg_match($pattern, $value, $matches);
            if (count($matches) > 0) {
                unlink("Desktop/".$value);
            }
        }
        // changing the header to the location of the file makes it work well on androids
        header('Location: '.str_replace(".docx", ".pdf", $_POST["filename"]));
        ?>

    And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere; I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the PHP script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a webserver using 100% free, open source software!
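
    For comparison, the same headless conversion driven from Python instead of PHP - a minimal sketch, assuming the libreoffice binary is on the PATH; the paths are illustrative:

        # Sketch: convert a .docx to PDF with headless LibreOffice.
        import subprocess
        from pathlib import Path

        def docx_to_pdf(src: str, outdir: str = "output") -> Path:
            Path(outdir).mkdir(parents=True, exist_ok=True)
            subprocess.run(
                ["libreoffice", "--headless", "--convert-to", "pdf",
                 "--outdir", outdir, src],
                check=True,
                timeout=120,  # conversions occasionally hang; fail loudly
            )
            return Path(outdir) / (Path(src).stem + ".pdf")

        if __name__ == "__main__":
            print(docx_to_pdf("fileToConvert.docx"))

    The same CDE-wrapping trick should apply unchanged: point the command list at ./libreoffice.cde instead of libreoffice.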

    Read the article

  • Contract Lifecycle Management for Public Sector

    - by jeffrey.waterman
    One thing Oracle never seems to get enough credit for is its consistent quest to improve its products, even ones as established as its back-office solutions. Here is another example of one of the latest improvements: Contract Lifecycle Management for Public Sector, or CLM - the latest EBS module geared specifically for the Federal acquisition community.

    Our existing customers have been asking Oracle for years to upgrade its Advanced Procurement Suite to meet the complex procurement processes of the Federal Government. You asked; we listened. Oracle, with direct input from Federal agencies, subject matter experts, integration partners, and the Federal acquisition community, has expanded and deepened its procurement suite to meet the unique demands of the Federal acquisition community. New benefits/features include:

    - Contract Line Item/Sub-Line Item (CLIN/SLIN) structures
    - Configurable document numbering
    - Complex pricing
    - Contract types (as per FAR Part 16)
    - Option lines and exercising of options
    - Incremental funding capability
    - Support for multiple document types (delivery orders, BPA call orders, awards, agreements, IDIQ contracts)
    - Requisition lines to fund modifications
    - Workload assignment and milestones
    - Contract Action Reporting to FPDS-NG

    I've been conducting many tests over the past few months and have been quite impressed with the depth of features and the seamless integration with Federal Financials, specifically the funds control within the financials. Again, thank you for reading. If you have suggestions for future posts, please leave them in the comments section and I'll take it from there.

    Read the article

  • Java Embedded Releases

    - by Tori Wieldt
    Oracle today announced a new product in its Java Platform, Micro Edition (Java ME) product portfolio: Oracle Java ME Embedded 3.2, a complete client Java runtime optimized for resource-constrained, connected, embedded systems. Oracle is also releasing Oracle Java Wireless Client 3.2 and Oracle Java ME Software Development Kit (SDK) 3.2, and announced Oracle Java Embedded Suite 7.0 for larger embedded devices, providing a middleware stack for embedded systems. Small is the new big!

    Introducing Oracle Java ME Embedded 3.2

    Oracle Java ME Embedded 3.2 is designed and optimized to meet the unique requirements of small embedded, low-power devices such as micro-controllers and other resource-constrained hardware without screens or user interfaces. These include:

    - On-the-fly application downloads and updates
    - Remote operation, often in challenging environments
    - Ability to add new capabilities without impacting the existing functions
    - Support for hardware with as little as 130 kB RAM and 350 kB ROM

    Oracle Java Wireless Client 3.2

    Oracle Java Wireless Client 3.2 is built around an optimized Java ME implementation that delivers a feature-rich application environment for mass-market mobile devices. This new release:

    - Leverages standard JSRs, Oracle optimizations/APIs, and a flexible porting layer for device-specific customizations, which are tuned to device/chipset requirements
    - Supports advanced tooling functions, such as memory and network monitoring and on-device tooling
    - Offers new support for dual SIM functionality, which is highly useful for mass-market devices supported by multiple carriers with multiple phone connections

    Oracle Java ME SDK 3.2

    Oracle Java ME SDK 3.2 provides a complete development environment for both Oracle Java ME Embedded 3.2 and Oracle Java Wireless Client 3.2. Available for download from OTN, the latest version includes:

    - Small embedded device support
    - In-field and remote administration and debugging
    - Java ME SDK plug-ins for Eclipse and the NetBeans Integrated Development Environment (IDE), enabling more application development environments for Java ME developers
    - A new device skin creator that developers can use to generate custom device skins for testing their applications

    Oracle Java Embedded Suite 7.0

    The Oracle Java Embedded Suite is a new packaged solution from Oracle (including Java DB, GlassFish for Embedded Suite, Jersey Web Services Framework, and the Oracle Java SE Embedded 7 platform), created to provide value-added services for collecting, managing, and transmitting data to and from other embedded devices. The Oracle Java Embedded Suite is a complete device-to-data-center solution subset for embedded systems.

    See Java ME and Java Embedded in Action

    Java ME and Java Embedded technologies will be showcased for developers at JavaOne 2012 in over 60 conference sessions and BOFs, as well as in the JavaOne Exhibition Hall. For business decision makers, the new event Java Embedded @ JavaOne will help you learn more about Java Embedded technologies and solutions.

    Read the article

  • Memories about Tadeusz Golonka

    - by Damian
    Today at 10:55 AM, Tadeusz Golonka - my greatest Mentor and Teacher - passed away.

    I had the opportunity to meet Tadek in person several times over the last few years. It was always a great experience to see how he shared his energy and passion. I was always impressed, and I had a lot of new ideas after such a meeting or lecture. I remember the meeting in early 2009 and the brilliant speech he gave for us, the MVP community in Poland. We spent two days together and he talked to us all the time. He gave us examples of how to share IT passion with other people and how to be a better person for others. He was the greatest Mentor I have ever met - I realized this during that meeting. My greatest dream was, and still is, to be "like Tadek". Many times I went to events just to see and hear him on stage ("in action"). I always wanted to have his energy, empathy and passion. Now I have to live without his good words and advice...

    Let me put here the words that Adam Cogan wrote on Tadek's profile on Facebook; I just can't write about that fatal accident myself: "The circumstances of Tadeusz Golonka's death are too tragic. Tad stood up to offer his seat to an elderly lady; he lost his balance, and then he slipped and hit the tram door hard. He then fell out of the tram and hit the metal barriers that separate the tram rails from the street. It was a severe accident... so horrible. At first it seemed a miracle that he survived... he fought for several days. My thoughts are with his lovely family. The family have asked for blood donations as a symbolic gift; Tad received a lot of blood. Thank you Tad, you were a wonderful person. I will remember you as a kind man, a gentleman."

    RIP Tadeusz - you will never be forgotten. You are with us all the time.

    Read the article

  • Cron job running successfully suddenly reports script is not found

    - by Ted B
    What might cause cron to suddenly report that a file it is supposed to run is "not found", when the file hasn't been touched - and, in fact, the entire system hasn't been touched - since it last ran successfully?

    I have a cron schedule I define by doing:

        sudo crontab -e

    In it, I have dozens of cron jobs that run successfully. I do not have a PATH specified, and I use absolute paths to call all my scheduled scripts, setting the PATH in them as needed. I do not specify a SHELL in the crontab; all scripts identify the shell as their first line. Without me touching the system, a particular job defined in the middle of other jobs will suddenly stop running. To debug this, I added an output redirection to a log file. In that log, the output clearly shows the script successfully running time after time for weeks, and then suddenly the following appears:

        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        /bin/sh: /home/iupress/bin/sync-email_images: not found
        ... (one identical line per scheduled run)

    If I run the ls command, copying and pasting that exact path from the error message, it clearly reports that the file is still there (no surprise). Yet the log continues to report the file as "not found" until I take action. I can run the script manually and it runs just fine. If I do sudo crontab -e and save the file, the job runs at the next scheduled time, putting its output in the log and no longer reporting the script as "not found".

    It seems to me the contents of the script are irrelevant, since cron doesn't even process the file because it is "not found". I have a job scheduled below the one that is encountering this problem that I know continues to run, because I have its output mailed to me. So I know cron is running, and continues to run at least one other job, even after it suddenly reports this job's script is "not found".

    All my lines end with a newline. I had no periods in the crontab until I added the redirection to a log file. I have now added a PATH specification, but left the absolute paths in the jobs. Unfortunately, I have no idea if and when this problem will occur again; it will likely be weeks from now. By the way, I am running a script to synchronize the clock, and I see the time is exactly what it should be.
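
    One hedged way to gather evidence for the next occurrence (an approach I'd suggest, not something from the post): point the cron line at a tiny wrapper that records what cron actually sees immediately before handing off to the real script. The log location below is an assumption:

        #!/usr/bin/env python3
        # Sketch: log cron's view of the world, then exec the real script.
        import datetime
        import os
        import sys

        REAL = "/home/iupress/bin/sync-email_images"

        with open("/tmp/cron-debug.log", "a") as log:
            log.write("%s PATH=%s exists=%s\n" % (
                datetime.datetime.now(),
                os.environ.get("PATH"),
                os.path.exists(REAL)))

        os.execv(REAL, [REAL] + sys.argv[1:])

    If the wrapper's log one day shows exists=False while a later ls shows the file, that would point at the filesystem (an automounted or encrypted home directory, for instance) rather than at cron or the crontab itself.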

    Read the article

  • My shiny new gadget

    - by TechTwaddle
    About 3 months ago, when I tweeted (or twit?) that the HD7 could be my next phone, I wasn't 100 percent sure, and when the HTC Mozart came out it was switch at first sight. I wanted to buy the Mozart mainly for three reasons: its unibody construction, smaller screen and the SLCD display. But now, holding a HD7 in my hand, I reminisce and think about how fate had its own plan. Too dramatic for a piece of gadget? Well, sort of, but seriously, this has been most exciting. So in short, I bought myself a HTC HD7 and am really loving it so far. Here are some pics (taken from my HD2, which now lies in a corner, crying).

    Most of my day was spent setting up the device: email accounts, Facebook, Marketplace, etc. Since Marketplace isn't officially launched in India yet, my primary Live ID did not work. Whenever I tried launching Marketplace it would say 'marketplace is not currently supported in your country'. Searching the forums, I found an easy workaround: just create a dummy Live ID with the country set to UK or US and log in to the device using this ID. I was worried that the contacts and feeds from my primary Live account would not be updated, but that was not a problem. Adding another Live account to the device does import your contacts, calendar and feeds from it. And that's it - Marketplace now works perfectly. I installed a few trial and free applications; haven't checked if I can purchase apps though, will check that later and update this post.

    There is one issue I am still facing with the device: I can't access the internet over GPRS. Windows Phone 7 only gives you the option to add an 'APN' and nothing else. Checking the connection settings on my HD2, I found out that there is also a proxy server I need to add to access GPRS, but so far I haven't found a way to do that on WP7. Ideally HTC should have taken care of this - detect the operator and apply that operator's settings on the device - but looks like that's not happening. I also tried the 'Connection Settings' application that HTC bundled with the device, but it did nothing magical. If you're reading this and know how to fix this problem, please leave a comment.

    The next thing I did was install apps, a lot of apps. Read Engadget's guide to essential apps for WP7. The apps and games I installed so far include Beezz (twitter app with push notifications), twitter (the official twitter app), Facebook, YouTube, NFS Undercover, Rocket Riot, Krashlander, Unite, and the list goes on. All the apps run super smooth. The display looks fine indoors, but I know it's going to suck in bright sunlight. Anyhow, I am really impressed with what I've seen so far. I leave you with a few more photos. Have a great year ahead. Ciao!

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large, but they were, by today's standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the version control system, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had.

    The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system that meant only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else's work-in-progress and data.

    It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table's columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit tests. A retro-generated routine has no unit tests or test assertions. Yes, one can still commit our test code to the VCS, but it's a separate module, and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn't scale for database testing.

    With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond 'MS_Description' for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database and therefore the VCS.

    We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and the number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I'm one of very few who miss them.
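
    As a small, concrete taste of the extra work involved (a sketch under stated assumptions: pyodbc is available, and the convention is the 'MS_Description' property name discussed above), pulling comment blocks back out of extended properties so they can travel with retro-generated scripts might look like:

        # Sketch: dump MS_Description extended properties as SQL comments.
        import pyodbc  # assumed driver

        SQL = """
        SELECT o.name AS object_name, c.name AS column_name, ep.value AS description
        FROM sys.extended_properties ep
        JOIN sys.objects o ON ep.major_id = o.object_id
        LEFT JOIN sys.columns c
            ON ep.major_id = c.object_id AND ep.minor_id = c.column_id
        WHERE ep.name = 'MS_Description'
        ORDER BY o.name, c.name
        """

        def dump_descriptions(conn_str):
            with pyodbc.connect(conn_str) as conn:
                for obj, col, desc in conn.execute(SQL):
                    target = "%s.%s" % (obj, col) if col else obj
                    print("-- %s: %s" % (target, desc))

        if __name__ == "__main__":
            dump_descriptions("DSN=dev")  # connection details are an assumption

    Anything along these lines still leaves the harder problem the essay raises: there is no agreed property name for test assertions, so each team has to invent its own convention.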

    Read the article

  • Send an E-mail Quickly with the GmailThis! Bookmarklet

    - by Asian Angel
    Sometimes you need to send out a quick e-mail for a project that you are working on, something really important that you just remembered, or perhaps just a note to yourself. If you use Gmail and like keeping things simple, then join us as we look at the GmailThis! Bookmarklet.

    GmailThis! in Action

    To get set up, all that you need to do is visit the webpage (link provided below) and drag the bookmarklet to your "Bookmarks Toolbar". For our example we decided to go with the "personal note" approach. As you can see here, we selected/highlighted a portion of the text and then clicked on our new bookmarklet. The bookmarklet will automatically copy and paste the name of the webpage, the URL, and any text that you selected/highlighted into the new e-mail. A nice feature that we liked was that it opened in a new temporary window to help focus on composing our letter. This is what you will see when you have finished your letter and clicked "Send". The window will automatically close itself after a few seconds, so that you do not even have to worry about it afterwards. Looking at our "Inbox", there is our new e-mail looking oh so nice.

    Conclusion

    If you need to send out a quick e-mail using your Gmail account, then this bookmarklet makes it as quick and simple as possible. This is definitely one to add to your bookmarklets collection.

    Links

    Get the GmailThis! Bookmarklet for your browser

    Read the article

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question

    Will I get penalised for "sneaky JavaScript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?

    Goal

    I want to implement Hijax to enable AJAX content to be accessible to non-JavaScript users and search engine crawlers.

    Background

    I'm working on a static file server (GitHub Pages). No server-side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY: I don't want to repeat the common OUTER template in all my files, i.e. header, navigation menu, footer, etc. They will live in the main index.html.

    Setup: the Hijax

    - index.html contains all the OUTER html/css/js... the site's template.
    - index.html has a <div id="content"> which defaults to containing the "homepage" html.
    - index.html has a navigation menu, with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler) it follows the link to /about.html. With JavaScript enabled (e.g. most people) the link updates the url hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.

    AJAX content

    - about.html does not contain anything extra to try and cloak content for crawlers.
    - about.html contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data, e.g. <html><head><title>About</title>...</head><body></body></html>.
    - about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc).
    - about.html's <body> contains a <div id="inner-container"> which holds the content that is injected into index.html.
    - about.html has a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page's "inner content", with a link to navigate to the index.html page to get the full page layout with menu.

    The (Sneaky?) Redirect

    Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER html template (e.g. header, navigation menu, footer, etc). So I need to do a JavaScript redirect to get the person over to the /#about page (deep-linking to the "about" page "state" in index.html). I'm thinking of doing a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template - but the core "page" content is practically identical.

    Known issue with inbound links (e.g. share / bookmarking)

    It seems that if a user shares the URL /#about on their blog, then when allocating inbound links to my site Google ignores everything after the #... it allocates value to the / page - see: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try to minimise this issue by offering "share" buttons on the page with the appropriate urls, i.e. /about.html.

    Duplicate

    Sorry, I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google, then realised it probably belongs more on this Stack Exchange site. Not sure if I should delete the Stack Overflow question, or just leave it on both sites? Please leave a comment.

    Read the article

  • Moving from Silverlight 4 Beta to RC - Part 1

    The other day I had finished up my Task-It webinar, written a few blog posts, and knew the time had come to move from my Silverlight 4 Beta environment up to the latest RC (release candidate) bits that were released last Monday. What disappointed me when I went to the Silverlight 4 Information Page is that it told me what to install, but not what to uninstall first.

    Uninstalling

    I'm not entirely sure if I had to uninstall anything, or if installing the new stuff would just work, but in poking around the web I found posts stating that you must uninstall the following items first. Unfortunately I'm going by memory here and have not been able to find my way back to the magic page I found in the myriad of posts that I went through:

    - Microsoft Visual Studio 2010 Beta 2
    - Microsoft .NET Framework 4 Extended (apparently this must be done *before* the next one)
    - Microsoft .NET Framework Client Profile
    - Microsoft Silverlight 4 Tools for Visual Studio 2010
    - WCF RIA Services Preview for Visual Studio 2010

    While I was at it, I removed a bunch of other stuff, like Blend 3, Blend 4, the SDKs associated with them, and a bunch of other stuff that was old. Of course, I didn't really want/need to keep any Silverlight 3 stuff around, as I am developing Task-It in Silverlight 4. If I need a Silverlight 3 environment at some point I'll set it up in a virtual environment.

    NOTE: One thing that I did not uninstall is the Microsoft Silverlight 4 Toolkit November 2009. The reason is that they have not released the March version yet, so if you uninstall this, you'll end up having to reinstall it.

    Installing

    OK, now that I had all of that old stuff off my machine, it was time to get the new stuff. For this part I liked Tim Heuer's post, A Guide to What has Changed in Silverlight 4 RC, better than the Silverlight 4 Information Page.

    - Visual Studio 2010 RC - I installed Ultimate, but you may not need that version. No harm, it's free for now anyway. I downloaded the .exe and the 3 .rar files, then ran the .exe. I then extracted the contents of the ISO (using WinZip) to a new directory, and now had Setup.exe, a bunch of .cab files and some other assorted stuff. I simply ran Setup.exe and chose custom install (only because I wanted to uncheck Visual C++... I don't really have a need for that).
    - Silverlight 4 Tools for Visual Studio 2010 - As mentioned in Tim's blog post, this installs the Silverlight developer runtime, SDK, tools, and WCF RIA Services.
    - WCF RIA Services Toolkit March 2010 - I'm not sure if/when I'll need any of this stuff, but no harm in installing it anyway.
    - Expression Blend 4 beta - Only if you plan to use Blend, which I do.
    - Windows Phone Developer Tools - Only if you are interested in playing with Windows Phone 7 development.

    Wrap Up

    Hopefully I got those steps right. If anyone finds anything I've missed, please just add a comment to this post and I'll update it accordingly.

    Read the article

  • Access Offline or Overloaded Webpages in Firefox

    - by Asian Angel
    What do you do when you really want to access a webpage, only to find that it is either offline or overloaded from too much traffic? You can get access to the most recent cached version using the Resurrect Pages extension for Firefox.

    The Problem

    If you have ever encountered a website that has become overloaded and unavailable due to sudden popularity (i.e. Slashdot, Digg, etc.), then this is the result. No satisfaction to be had here...

    Resurrect Pages in Action

    Once you have installed the extension, you can add the Toolbar Button if desired... it will give you the easiest access to Resurrect Pages. Or you can wait for a problem to occur when trying to access a particular website and have it appear as shown here. As you can see, there is a very nice selection of cache services to choose from, therefore increasing your odds of accessing a copy of that webpage. If you would prefer to have the access attempt open in a new tab or window, then you should definitely use the Toolbar Button: clicking on it will give you access to the popup window shown here... otherwise the access attempt will happen in the current tab. Here is the result for the website that we wanted to view using the Google Listing, followed by the Google (text only) Listing. The results with the different services will depend on how recently the webpage was published/set up.

    View Older Versions of Currently Accessible Websites

    Just for fun, we decided to try the extension out on the How-To Geek website to view an older version of the homepage. Using the Toolbar Button and clicking on The Internet Archive brought up the following page... we decided to try the Nov. 28, 2006 listing. As you can see, things have really changed between 2006 and now... Resurrect Pages can be very useful for anyone who is interested in how websites across the web have grown and changed over the years.

    Conclusion

    If you encounter a webpage that is offline or overloaded by sudden popularity, then the Resurrect Pages extension can help you get access to the information that you need using a cached version.

    Links

    Download the Resurrect Pages extension (Mozilla Add-ons)

    Read the article

  • Sortable & Filterable PrimeFaces DataTable

    - by Geertjan
        <h:form>
            <p:dataTable value="#{resultManagedBean.customers}" var="customer">
                <p:column id="nameHeader" filterBy="#{customer.name}" sortBy="#{customer.name}">
                    <f:facet name="header">
                        <h:outputText value="Name" />
                    </f:facet>
                    <h:outputText value="#{customer.name}" />
                </p:column>
                <p:column id="cityHeader" filterBy="#{customer.city}" sortBy="#{customer.city}">
                    <f:facet name="header">
                        <h:outputText value="City" />
                    </f:facet>
                    <h:outputText value="#{customer.city}" />
                </p:column>
            </p:dataTable>
        </h:form>

    That gives me this: [screenshot] And here's the filter in action: [screenshot]

    Behind this, I have:

        import com.mycompany.mavenproject3.entities.Customer;
        import java.io.Serializable;
        import java.util.List;
        import javax.annotation.PostConstruct;
        import javax.ejb.EJB;
        import javax.faces.bean.RequestScoped;
        import javax.inject.Named;

        @Named(value = "resultManagedBean")
        @RequestScoped
        public class ResultManagedBean implements Serializable {

            @EJB
            private CustomerSessionBean customerSessionBean;

            public ResultManagedBean() {
            }

            private List<Customer> customers;

            @PostConstruct
            public void init() {
                customers = customerSessionBean.getCustomers();
            }

            public List<Customer> getCustomers() {
                return customers;
            }

            public void setCustomers(List<Customer> customers) {
                this.customers = customers;
            }
        }

    And the above refers to the EJB below, which is a standard EJB that I create in all my Java EE 6 demos:

        import com.mycompany.mavenproject3.entities.Customer;
        import java.io.Serializable;
        import java.util.List;
        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless
        public class CustomerSessionBean implements Serializable {

            @PersistenceContext
            EntityManager em;

            public List getCustomers() {
                return em.createNamedQuery("Customer.findAll").getResultList();
            }
        }

    The only problem is that the columns are only sortable after the first time I use the filter.

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation. Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left. I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading! Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all? Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane) which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including: trace, log, checkpoint before, suspend, abort, and several others. With 11gR1, events can now be triggered by DDL operations, plus variables can be passed in and out of the system to shell scripts.

    Some good use cases for this feature are:

    - Automatic switchover to the secondary system during planned outages
    - Better monitoring over source systems' performance and automated switchover to the standby system in case of an outage with the primary system
    - Automatic switchover from initial load to changed-data movement
    - Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency
    - Automatic stoppage of the Delivery module to allow end-of-day reporting
    - Finding, tracking, and reporting on transactions that are of interest, including ones that do not have primary keys or transaction record numbers

    If you would like to see a demo, please visit our YouTube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions of the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.

    Read the article

  • Data-Driven SOA with Oracle Data Integrator

    - by Irem Radzik
    By Mike Eisterer

    Data integration is more than simply moving data in bulk or in real time; it is also about unifying information for improved business agility and integrating it into today's service-oriented architectures. SOA enables organizations to easily define services which may then be discovered and leveraged by varying consumers. These consumers may be applications, customer-facing portals, or complex business rules which are assembling services to automate processes. Data as a foundational service provider is a key component of today's successful SOA implementations. Oracle offers the broadest and most integrated portfolio of products to help you define, organize, orchestrate and consume data services.

    If you are attending Oracle OpenWorld next week, you will have ample opportunity to see the latest Oracle Data Integrator live in action and work with it yourself in two offered hands-on labs:

    - Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab (HOL10480), Wed Oct 3rd, 11:45AM, Marriott Marquis - Salon 1/2
    - Introduction to Oracle Data Integrator (HOL10481), Mon Oct 1st, 3:15PM, Marriott Marquis - Salon 1/2

    If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

    Read the article

  • Can't finish upgrade from 11.10 to 12.04 on a VPS based on Parallels Virtuozzo Containers, due to libc6

    - by Carmageddon
    I was stuck with this problem near the end of an upgrade:

        WARNING: this version of the GNU libc requires kernel version
        2.6.24 or later.  Please upgrade your kernel before installing glibc.
        The installation of a 2.6 kernel could ask you to install a new libc
        first, this is NOT a bug, and should NOT be reported. In that case,
        please add lenny sources to your /etc/apt/sources.list and run:
          apt-get install -t lenny linux-image-2.6

    Their suggested steps don't work on a VPS, and after googling I came across this: Why did my upgrade to 12.04 fail with "glibc not found" or "libc6" or "requires kernel 2.6.24" error? There is a comment by izx which explains my problem and proposes a workaround (it might take a while to convince the guys to upgrade the kernel..). However, when I follow his instructions, I get an error:

        # apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libc-dev-bin libc6 libc6-dev libnih1
        Suggested packages:
          glibc-doc
        The following packages will be upgraded:
          libc-dev-bin libc6 libc6-dev libnih1
        4 upgraded, 0 newly installed, 0 to remove and 394 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/7737 kB of archives.
        After this operation, 233 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale)
        Preconfiguring packages ...
        (Reading database ... 35175 files and directories currently installed.)
        Preparing to replace libc6-dev 2.13-20ubuntu5.2 (using .../libc6-dev_2.15-0ubuntu10.3_amd64.deb) ...
        Unpacking replacement libc6-dev ...
        Preparing to replace libc-dev-bin 2.13-20ubuntu5.2 (using .../libc-dev-bin_2.15-0ubuntu10.3_amd64.deb) ...
        Unpacking replacement libc-dev-bin ...
        Preparing to replace libc6 2.13-20ubuntu5.2 (using .../libc6_2.15-0ubuntu10.3_amd64.deb) ...
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale)
        Checking for services that may need to be restarted...
        Checking init scripts...
        runlevel:/var/run/utmp: No such file or directory
        Checking for services that may need to be restarted...
        Checking init scripts...
        runlevel:/var/run/utmp: No such file or directory
        WARNING: init script for samba not found.
        Stopping some services possibly affected by the upgrade (will be restarted later):
          cron: stopping...done.
        WARNING: this version of the GNU libc requires kernel version
        2.6.24 or later.  Please upgrade your kernel before installing glibc.
        The installation of a 2.6 kernel _could_ ask you to install a new libc
        first, this is NOT a bug, and should *NOT* be reported. In that case,
        please add lenny sources to your /etc/apt/sources.list and run:
          apt-get install -t lenny linux-image-2.6
        Then reboot into this new kernel, and proceed with your upgrade
        dpkg: error processing /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        Processing triggers for man-db ...
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)
        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by locale)
        Errors were encountered while processing:
         /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I also attempted to manually grab the .deb package and install it using dpkg -i, but I get:

        locale: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by locale)

    even though the file is libc-bin_2.15-0ubuntu10+openvz0_amd64.deb.
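    (For anyone hitting the same wall, here is a hedged sketch of the direction I am attempting - the URL is a placeholder for wherever the openvz-patched builds live, and this is not a verified fix:

        # Fetch the openvz-patched libc packages (built without the kernel
        # version check), then install them in a single dpkg run so they
        # satisfy each other's version dependencies; let apt finish the rest.
        wget http://example.com/libc6_2.15-0ubuntu10+openvz0_amd64.deb
        wget http://example.com/libc-bin_2.15-0ubuntu10+openvz0_amd64.deb
        sudo dpkg -i libc6_2.15-0ubuntu10+openvz0_amd64.deb libc-bin_2.15-0ubuntu10+openvz0_amd64.deb
        sudo apt-get -f install

    The reason installing libc-bin on its own fails, as above, is that its new locale binary needs the matching libc6 2.15 already in place.)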

    Read the article

  • Help with Strategy-game AI

    - by f20k
    Hi, I am developing a strategy-game AI (think: Final Fantasy Tactics), and I am having trouble coming up with the design of the AI. My main problem is determining the optimal thing for it to do. First, let me describe the priority of actions I would like the AI to take:
    - Kill nearest player unit
    - Fulfill primary directive (kill all player units, kill target unit, survive for x turns)
    - Heal ally unit / cast buffs
    Now the AI can do the following in its turn:
    - Move - {Attack / Ability / Item} (either attack or ability or item)
    - {Attack / Ability / Item} - Move
    - Move closer (if targets not in range)
    - {Attack / Ability / Item} (if move not available)
    Notes:
    - Abilities have various ranges / effects / costs. Each AI unit has maybe 5-10 abilities to choose from.
    - The AI will prioritize killing over safety unless its directive is to survive for x turns. It also doesn't care much about ability cost. While a player may want to save a big spell for later, the AI will most likely use it asap.
    - Movement is on a (hex) grid.
    - Number of player units: 3-6; number of AI units: 3-7 or more, probably max 10.
    - AI and player take turns controlling ONE unit, instead of all at the same time.
    - Platform is Android (if the program doesn't respond after some time, there will be a popup saying to Force Quit or Wait - which looks really bad!).
    Now come the questions. The best ability to use would obviously be the one that hits the most targets for the most damage. But since each ability has a different range, I won't know whether targets are in range without exploring each possible place I can move to. One solution would be to go through each possible place to move to and determine the optimal attack at that location - which gives me a list of optimal moves for each location - then choose the optimal out of the list and execute it (see the sketch after this question). But this will take a lot of CPU time. Is there a better solution? My current idea is to move as close as possible towards the closest, largest group of player units, and determine the optimal attack/ability from there. I think this would be a lot less work for the CPU and still allow for wide-range attacks. It's sub-optimal, but the AI will still seem 'smart'. Other notes/questions: Am I over-thinking/over-complicating it? Better solution? I am open to all sorts of suggestions. I have taken a look at the spell-casting question, but it doesn't take movement into account - so perhaps use that algorithm for each possible move location? The top answer mentioned it wasn't great for area-of-effect and group fights - so maybe it requires more tweaking? Please, if you mention a graph/tree, let me know basically how to use it. E.g. node means ability, level corresponds to damage, then search for the deepest node.
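    A minimal Python sketch of the brute-force option described above (every name here - reachable_tiles, targets_in_range, the damage scoring - is a hypothetical API for illustration, not working game code):

        def choose_action(unit, board):
            # Evaluate every reachable hex x every ability; keep the best pair.
            best = None  # (score, tile, ability)
            for tile in board.reachable_tiles(unit):          # hexes the unit can move to
                for ability in unit.abilities:
                    targets = board.targets_in_range(tile, ability)
                    if not targets:
                        continue
                    # Simple score: total damage dealt, capped at each target's HP
                    score = sum(min(ability.damage, t.hp) for t in targets)
                    if best is None or score > best[0]:
                        best = (score, tile, ability)
            return best  # None => nothing in range; fall back to "move closer"

    The cost is O(reachable tiles x abilities x targets), which is where the CPU concern comes from; the "move towards the largest group" idea effectively fixes the tile to one candidate and only loops over abilities.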

    Read the article

  • PowerShell and SMO – be careful how you iterate

    - by Fatherjack
    I've yet to have a totally smooth experience with PowerShell, and it was late on Friday when I crashed into this problem. I haven't investigated whether this is a generally well-understood circumstance, and if it is then I apologise for repeating everything. Scenario: I wanted to scan a number of servers for many properties, including existing logins, and to identify which accounts are bestowed with sysadmin privileges. A great task to pass to PowerShell, so with a heavy heart I started up PowerShell ISE and started typing. The script doesn't come easily to me, but I follow the logic of SMO and the properties and methods available with the language, so it seemed something I should be able to master. Version #1 of my script. And the results it returns when executed against my home laptop server. These results looked good, and for a long time I was concerned with other parts of the script, for all intents and purposes quite happy that this was an accurate assessment of the server. Let's just review my logic for each step of the code at the top:
    - Lines 1 to 7 just set up our variables and write out the header message
    - Line 8 - our first loop, to go through each login on the server
    - Line 10 - an inner loop that will assess each role name that each login has been assigned
    - Line 11 - a test to see if each role has the name 'sysadmin'
    - Line 13 - write out the login name with a bright format, as it is a sysadmin login
    - Line 17 - write out the login name with no formatting
    It is quite possible that here someone with more PowerShell experience than me will be shouting at their screen, pointing at the error I made, but to me this made total sense. Until I altered the code. I altered lines 6 and 7 of the code above to be:

        $c = $Svr.Logins.Count
        write-host "There are $c Logins on the server"

    This changed my output to look like this: This started alarm bells ringing - there are clearly not 13 logins listed. So, let's see where things are going wrong; edit the script so it looks like this (I've highlighted the changes to make). Running this code shows me these results. Our $n variable should count up by one for each login returned, and we are clearly missing some logins. I referenced this list back to Management Studio for my server and see the logins as below, where there are clearly 13 logins. We see a login called Annette in SSMS but not in the script results, so I opened that up and looked at its properties and its server roles in particular. The account has only public access to the server. Inspection of the other logins that the PowerShell script misses shows they too are only members of the public role. Right now I can't work out whether there is a good reason for this and whether it should be expected behaviour or not. Please spend a few minutes to leave a comment if you have an opinion or theory. How to get the full list of logins: clearly I needed to get a full list of the logins, so I set about reviewing my code to see if there was a better way to iterate through the roles for each login. The code that I came up with (sketched below) is doing everything that I need it to and gives me the expected results. So it seems that the ListMembers() method is the troublemaker in my first versions of the code. I would have expected ListMembers to cover logins that are only members of the public role; certainly Technet makes no reference to them being left out in its Login.ListMembers details. Suffice to say, it's a lesson learned and I will approach using it with caution in future.
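    Since the working script itself was shown as a screenshot, here is a minimal sketch of the direct approach - asking each login whether it is in the role via SMO's Login.IsMember(), which does not skip public-only logins. The server name is a placeholder and the formatting choices are illustrative, not necessarily the author's exact code:

        # Load SMO and connect to the instance (name is a placeholder)
        [void][System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
        $Svr = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
        $c = $Svr.Logins.Count
        Write-Host "There are $c Logins on the server"
        foreach ($Login in $Svr.Logins) {
            # IsMember tests one named role for this login, so logins that
            # belong only to public still pass through the loop and get listed
            if ($Login.IsMember('sysadmin')) {
                Write-Host $Login.Name -ForegroundColor Red   # sysadmin: highlight
            } else {
                Write-Host $Login.Name
            }
        }

    The key difference from the ListMembers() version is that nothing here iterates over a role collection that can be empty, so no login is silently skipped.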

    Read the article

  • Minimum team development sizes

    - by MarkPearl
    Disclaimer - these are observations that I have had. I am not sure if this follows the philosophy of scrum, agile or whatever, but most of these insights were gained while implementing a scrum scenario.

    Two is a partnership, three starts a team. For a while I thought that a team was anything more than one, and that scrum could be an effective methodology with even two people. I have recently adjusted my thinking to a scrum team being a minimum of three - so what happened to two, and what do you call it? For me, a group of two people working together is a partnership. There is value in having a partnership, but some of the dynamics and value that you get from having a team are lost with a partnership.

    Avoidance of a one-on-one confrontation. The first dynamic I see missing in a partnership is the team's motivation to do better and how this is conveyed to individuals that are not performing. Take two highly motivated individuals, put them together, and you will typically see them continue to perform. Now take a situation where you have two individuals, one performing and one not, and the behaviour is totally different compared to a team of three or more. With two people, if one feels the other is not performing, it becomes a one-on-one confrontation. Most people avoid confrontations, and so nothing changes. Compare this to a situation where you have three people in a team, two performing and one not: the dynamic is totally different. It is no longer a personal one-on-one confrontation but a team concern, and people seem more willing to encourage the individual not performing, and to express their dissatisfaction as a team if they do not improve.

    Avoiding the effects of Tuckman's Group Development Theory. If you are not familiar with Tuckman's group development theory, give it a read (http://en.wikipedia.org/wiki/Tuckman's_stages_of_group_development). In a nutshell, Tuckman's theory holds that teams go through the stages of Forming, Storming, Norming & Performing. You want your team to reach and remain in the Performing stage for as long as possible - this is where you get the most value. When you have a partnership of two and you change the individuals in the partnership, you basically do a hard reset on the partnership and go back to the beginning of Tuckman's model each time. This has a major effect on the performance of a team and what they can deliver. What I have seen is that you reduce the effects of Tuckman's theory the more individuals you have in the team (until you hit the maximum team size, at which point other problems kick in). While you will still experience Tuckman's theory with a team of three, the impact will be greatly reduced compared to two, where it is guaranteed every time a change occurs.

    It's not just in the numbers, it's in the people. One final comment - while the actual numbers of a team do play a role, the individuals in the team are even more important. Ideally you want to keep individuals working together for an extended period. That doesn't mean that you never change the individuals in a team, or that once someone joins a team they are stuck there - there is value in an individual moving from team to team and getting cross-pollination - but the period between moves should be measured in months or years, not days or weeks.

    Why? So why is it important to know this? Why is it important to know how a team works and what motivates them?
    I have been asking myself this question for a while, and where I am at right now is this… the aim is to achieve the stage where the whole (team) is greater than the sum of the parts (team members). This is why we form teams, and why understanding how they work is a challenge and also extremely stimulating.

    Read the article

< Previous Page | 432 433 434 435 436 437 438 439 440 441 442 443  | Next Page >