Search Results

Search found 95644 results on 3826 pages for 'one zero'.

Page 20/3826 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)

    - by rahul286
    After debugging for 6 hours, I am giving this up :| We have nginx + php-fpm + mysql on a LAN with almost 100 WordPress installs (created and used by different designers/developers, all working on test WordPress setups). We have been using nginx without any issues for a long time. Today, all of a sudden, nginx started returning "504 Gateway Time-out" out of the blue... I checked the nginx error log for a virtual host:

        2010/09/06 21:24:24 [error] 12909#0: *349 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 21:25:11 [error] 12909#0: *349 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 21:25:11 [error] 12909#0: *443 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 21:25:12 [error] 12909#0: *443 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:08:32 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:09:33 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:09:40 [error] 12909#0: *1064 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:09:40 [error] 12909#0: *1064 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:24:44 [error] 12909#0: *1313 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"
        2010/09/06 22:24:53 [error] 12909#0: *1313 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info"

    As I run php-fpm on port 9000 in TCP mode, I ran "netstat | grep 9000" and noticed something unusual...
    (Pasting partial output here for ease of reading)

        tcp 9 0 localhost:9000 localhost:36094 CLOSE_WAIT 14269/php5-fpm
        tcp 0 0 localhost:46664 localhost:9000 FIN_WAIT2 -
        tcp 1257 0 localhost:9000 localhost:36135 CLOSE_WAIT -
        tcp 1257 0 localhost:9000 localhost:36125 CLOSE_WAIT -
        tcp 9 0 localhost:9000 localhost:36102 CLOSE_WAIT 14268/php5-fpm
        tcp 0 0 localhost:46662 localhost:9000 FIN_WAIT2 -
        tcp 745 0 localhost:9000 localhost:46644 CLOSE_WAIT -
        tcp 0 0 localhost:46658 localhost:9000 FIN_WAIT2 -
        tcp 1265 0 localhost:9000 localhost:46607 CLOSE_WAIT -
        tcp 0 0 localhost:46672 localhost:9000 ESTABLISHED 12909/nginx: worker
        tcp 1257 0 localhost:9000 localhost:36119 CLOSE_WAIT -
        tcp 1265 0 localhost:9000 localhost:46613 CLOSE_WAIT -
        tcp 0 0 localhost:46646 localhost:9000 FIN_WAIT2 -
        tcp 1257 0 localhost:9000 localhost:36137 CLOSE_WAIT -
        tcp 0 0 localhost:46670 localhost:9000 ESTABLISHED 12909/nginx: worker
        tcp 1265 0 localhost:9000 localhost:46619 CLOSE_WAIT -
        tcp 1336 0 localhost:9000 localhost:46668 ESTABLISHED -
        tcp 0 0 localhost:46648 localhost:9000 FIN_WAIT2 -
        tcp 1336 0 localhost:9000 localhost:46670 ESTABLISHED -
        tcp 9 0 localhost:9000 localhost:36108 CLOSE_WAIT 14274/php5-fpm
        tcp 1336 0 localhost:9000 localhost:46684 ESTABLISHED -
        tcp 0 0 localhost:46674 localhost:9000 ESTABLISHED 12909/nginx: worker
        tcp 1336 0 localhost:9000 localhost:46666 ESTABLISHED -
        tcp 1257 0 localhost:9000 localhost:46648 CLOSE_WAIT -
        tcp 1336 0 localhost:9000 localhost:46678 ESTABLISHED -
        tcp 0 0 localhost:46668 localhost:9000 ESTABLISHED 12909/nginx: wo

    There are plenty of "CLOSE_WAIT" & "FIN_WAIT2" pairs, as highlighted below (from the above output):

        tcp 1337 0 localhost:9000 localhost:46680 CLOSE_WAIT -
        tcp 0 0 localhost:46680 localhost:9000 FIN_WAIT2 -

    Please note port 46680 above. I enabled the MySQL slow query log, but that didn't help. As of now, restarting php5-fpm every minute via a cronjob (see command below) keeps everything running "smoothly", but I hate patchwork and want to solve this...

        1 * * * * service php5-fpm restart > /dev/null

    I searched extensively on Google - got no help. As mentioned, this is a test server on a LAN; CPU load never crossed 0.10 and memory usage is also below 25% (the system has 2GB RAM and ubuntu-server installed). So if you find it too time-consuming to help me out, please at least drop a hint. Thanks in advance for the help. -Rahul (note - this is a repost of http://forum.nginx.org/read.php?11,127694) Update: I found the answer, which is posted below.
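
    For reference, the knobs most often involved in this failure pattern are the php-fpm worker pool and script timeout on one side, and the matching nginx FastCGI timeouts on the other. The snippet below is only a hedged sketch, not the fix the poster eventually posted; the file paths and values are assumptions for a stock Ubuntu php5-fpm + nginx install and need adapting.

        ; /etc/php5/fpm/php-fpm.conf (or the pool file) -- assumed location and values
        pm = dynamic
        pm.max_children = 20              ; raise if every worker is busy and requests queue up
        request_terminate_timeout = 30s   ; kill a stuck script instead of holding its socket open

        # /etc/nginx/nginx.conf, inside the PHP location block -- assumed values
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;

    The CLOSE_WAIT pile-up in the netstat output usually means the FPM workers have stopped reading their sockets, so raising timeouts alone may only mask an exhausted or wedged worker pool.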

    Read the article

  • Move SQL-Server Database with zero downtime

    - by Uwe
    Hello, how is it possible to move a SQL 2005 database to a different SQL 2008(!) server without any downtime? The system runs 24/7 and has to be moved to a different server with different storage. We tried Copy Database, but that does not keep the whole database synchronous at the end of the process, only table by table.
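
    A common low-downtime approach for this kind of move is a backup/restore chain (essentially manual log shipping): restore a full backup on the new server WITH NORECOVERY, keep applying log backups while the old server stays live, and accept only a short outage for the final tail-log backup at cutover. The T-SQL below is a rough sketch under that assumption; the database name, logical file names and share paths are made up, and logins, jobs and compatibility level still have to be handled separately.

        -- On the SQL 2005 source: full backup, then periodic log backups
        BACKUP DATABASE MyDb TO DISK = N'\\fileshare\MyDb_full.bak';
        BACKUP LOG MyDb TO DISK = N'\\fileshare\MyDb_log_01.trn';

        -- On the SQL 2008 target: restore, leaving the database able to accept further logs
        RESTORE DATABASE MyDb FROM DISK = N'\\fileshare\MyDb_full.bak'
            WITH NORECOVERY,
            MOVE 'MyDb'     TO 'D:\Data\MyDb.mdf',
            MOVE 'MyDb_log' TO 'E:\Logs\MyDb_log.ldf';
        RESTORE LOG MyDb FROM DISK = N'\\fileshare\MyDb_log_01.trn' WITH NORECOVERY;

        -- At cutover: stop writes on the source, back up the tail of the log, restore it, go live
        BACKUP LOG MyDb TO DISK = N'\\fileshare\MyDb_tail.trn' WITH NORECOVERY;  -- leaves the source in RESTORING state
        RESTORE LOG MyDb FROM DISK = N'\\fileshare\MyDb_tail.trn' WITH RECOVERY;

    The cutover window is then just the time for the final log backup/restore plus repointing the application connection strings, rather than the full copy time.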

    Read the article

  • Nginx + PHP-FPM Timeouts, almost zero load consumption?

    - by javipas
    I've got a server running on a Linode with Ubuntu 10.04 LTS, Nginx 0.7.65, MySQL 5.1.41 and PHP 5.3.2 with PHP-FPM. There is a WordPress blog on it, recently updated to WordPress 3.2.1. I have made no changes to the server (except updating WordPress), and while it had been running fine, a couple of days ago I started having downtime. I tried to solve the problem, and checking the error_log I saw many timeouts and messages that seemed to be related to timeouts. The server is currently logging this kind of error:

        2011/07/14 10:37:35 [warn] 2539#0: *104 an upstream response is buffered to a temporary file /var/lib/nginx/fastcgi/2/00/0000000002 while reading upstream, client: 217.12.16.51, server: www.mydomain.com, request: "GET /page/2/ HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.mydomain.com/"
        2011/07/14 10:40:24 [error] 2539#0: *231 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 46.24.245.181, server: www.mydomain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.google.es/search?sourceid=chrome&ie=UTF-8&q=mydomain"

    I even found a previous Server Fault discussion with a possible solution: edit /etc/php/etc/php-fpm.conf and set request_terminate_timeout=30s instead of ;request_terminate_timeout= 0. The server worked for some hours, and then broke again. I edited the file again to leave it as it was, and restarted php-fpm again (service php-fpm restart), but no luck: the server worked for a few minutes and then went back to the problem, over and over. The strange thing is that, although the services are running, htop shows there is no CPU load (see image) and I really don't know how to solve the problem. The config files are on pastebin: the php-fpm.conf file is here, /etc/nginx/nginx.conf is here, and /etc/nginx/sites-available/www.mydomain.com is here. Please help :(
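
    For reference, the two symptoms in that log map onto different nginx directives: the "buffered to a temporary file" warning is governed by the FastCGI buffer sizes, and the "upstream timed out" errors by fastcgi_read_timeout. The block below is only an illustrative sketch; the values are assumptions and are not taken from the poster's pastebin configs.

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;

            # keep moderately large responses in memory instead of spilling to /var/lib/nginx/fastcgi
            fastcgi_buffer_size 32k;
            fastcgi_buffers 16 32k;

            # how long nginx waits on PHP-FPM before logging "upstream timed out"
            fastcgi_read_timeout 120;
        }

    Neither setting explains why PHP-FPM stops answering while the CPU sits idle, so they are tuning knobs rather than a root-cause fix.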

    Read the article

  • Amazon EC2 Elastic Load Balancing - strategy for zero downtime server restart

    - by Yoga
    I have 5 web servers (Apache/mod_perl) behind Amazon EC2 Elastic Load Balancing. When I deploy code to the web servers, I currently do this for each machine in turn:
    1. Shut down Apache
    2. Update the code
    3. Start the server again and proceed to the next server
    I think that while my server is shut down, ELB will not distribute requests to it, but what about the requests it is still serving? I think a better approach would be:
    1. Stop accepting new requests from ELB
    2. Wait a while, and shut down the web server only once all requests have been responded to
    3. Update the code
    4. Start the server again
    But how do I perform (1) and (2) from my local server? Do I need to use the AWS API, or is there an easier way to do it? Thanks.
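
    Steps (1) and (2) map directly onto the ELB API: deregister the instance, let in-flight requests finish, deploy, then register it again. Below is a hedged sketch using the AWS CLI (the older ELB command-line tools expose the same DeregisterInstancesFromLoadBalancer / RegisterInstancesWithLoadBalancer calls); the load balancer name, instance ID, hostname and deploy command are placeholders, and the sleep is a crude stand-in for connection draining.

        #!/bin/sh
        LB_NAME=my-load-balancer          # placeholder
        INSTANCE_ID=i-0123456789abcdef0   # placeholder
        HOST=web1.example.com             # placeholder

        # 1. Stop ELB from routing new requests to this instance
        aws elb deregister-instances-from-load-balancer \
            --load-balancer-name "$LB_NAME" --instances "$INSTANCE_ID"

        # 2. Give in-flight requests time to complete
        sleep 60

        # 3. Deploy the new code and restart Apache gracefully (replace with your own deploy step)
        ssh "$HOST" 'cd /var/www/app && git pull && sudo apachectl graceful'

        # 4. Put the instance back into rotation
        aws elb register-instances-with-load-balancer \
            --load-balancer-name "$LB_NAME" --instances "$INSTANCE_ID"

    Run from a local machine with AWS credentials configured, this avoids touching the ELB console for every deploy.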

    Read the article

  • Rapidly Deploy Oracle Applications with Oracle VM Templates

    - by monica.kumar
    Oracle today announced Oracle VM Templates for a number of Oracle Applications, including Oracle E-Business Suite 12.1, Oracle's JD Edwards EnterpriseOne 9.0, and Oracle's PeopleSoft 9.1. These Oracle VM Templates, based on Oracle Enterprise Linux, provide pre-installed and pre-configured enterprise software images that help eliminate the need to install new software from scratch, offering customers a time-saving approach to deploying a fully configured software stack. Learn more about Oracle VM Templates.

    Read the article

  • What one-time-password devices are compatible with mod_authn_otp?

    - by netvope
    mod_authn_otp is an Apache web server module for two-factor authentication using one-time passwords (OTP) generated via the HOTP/OATH algorithm defined in RFC 4226. The developer has listed only one compatible device (Authenex's A-Key 3600) on the project website. If a device is fully compliant with the standard, and it allows you to recover the token ID, it should work. However, without testing, it's hard to tell whether a device is fully compliant. Have you ever tried other devices (software or hardware) with mod_authn_otp (or another open-source server-side OTP program)? If yes, please share your experience :)
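
    When judging whether an unlisted token is "fully compliant", it helps that RFC 4226 itself is tiny: given the token's secret key and counter, the server side is just HMAC-SHA1 plus dynamic truncation. The Python sketch below is independent of mod_authn_otp and is shown only to illustrate the algorithm; it reproduces the first test vector from RFC 4226, Appendix D.

        import hashlib
        import hmac
        import struct

        def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
            """RFC 4226 HOTP: HMAC-SHA1 over the big-endian 8-byte counter, then dynamic truncation."""
            digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                              # low nibble of the last byte picks the window
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        # Secret "12345678901234567890", counter 0 must yield 755224 per the RFC
        assert hotp(b"12345678901234567890", 0) == "755224"

    In practice the compatibility question is less about the algorithm and more about whether the vendor will hand over the seed, which is what the "recover the token ID" requirement above comes down to.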

    Read the article

  • Unrated Easy iOS 6.1.4/6.1.3 Unlock/Jailbreak iPhone 5/4S/4/3GS Untethered System

    - by user171772
    Popular jailbreak tool Unlock-Jailbreak.net - compiled by the iPhone Team - has just been updated with full support for Unlock/Jailbreak iPhone 5/4S/4/3GS iOS 6.1.4 and 6.1.3/6.0.1 Untethered. You may have caught our tutorial, which detailed how one could jailbreak their device tethered using Redsn0w, although since it was a pre-iOS 6.1.1 release, users needed to "point" the tool to the older firmware. Team Unlock-Jailbreak, established a few years ago, combines some of the jailbreak and unlock community's most talented developers, all known for producing reliable jailbreaks in the past. This team was assembled in order to develop a reliable untethered jailbreak and unlock for iPhone 5/4S/4 on iOS 6.1 for post-A5 devices, including the iPhone 5, the iPad mini and the latest-generation iPad. This has now been achieved with the just-released userland jailbreak tool, known as Unlock-Jailbreak.net. To jailbreak and unlock your iPhone 5/4/4S/3GS on iOS 6.1.4 and 6.1.3, visit the official website http://www.Unlock-Jailbreak.net. Unlock-Jailbreak.net was formed in mid 2008 and has successfully jailbroken over 250,000 iPhones worldwide. This is unparalleled by any other service in the industry. They have achieved this by combining a very simple solution with a fantastic customer service department that is available 24/7 through many forms of contact, including telephone. Unlock-Jailbreak from Unlock-Jailbreak.net has been downloaded by over 250,000 customers located in over 145 countries. To further assure customers of its product's usability, Unlock-Jailbreak offers a 100% full money-back guarantee on all orders. Customers dissatisfied with the company's product will be given a full refund, no questions asked. One good advantage of the software is that the jailbreaking and unlocking process is completely reversible, and there will be no evidence that the iPhone has been jailbroken and unlocked. iOS 6.1/6.1.4 and 6.1.3 comes with many new features and updates for multitasking and storage. By unlocking and jailbreaking the iPhone, Unlock/Jailbreak iPhone 5/4S/4/3GS iOS 6.1/6.1.4 and 6.1.3/6.0.1 Untethered unleashes unlimited possibilities to improve this already fantastic experience and the iPhone's full potential. Before going through any jailbreak process with Unlock-Jailbreak it is always good housekeeping to perform a full backup of all information on the device. It is unlikely that anything will go wrong during the process, but when undertaking any process that modifies the internals of a file system it is always prudent to err on the side of caution.

    Read the article

  • Remove Google+ company page after verification?

    - by JohnJ
    My personal Google+ page has a lot of comments and posts that I'd like to keep. However, I've only just finished validating the business page as well (Google+ company page) in the hopes of improving traffic. Now that it's been validated, can I remove the Google+ Company link and switch it back to my original personal Google+ page? Scenario: My client is an accountant (one man shop) and so both personal and company Google+ pages would work.

    Read the article

  • Is syncing private keys a good idea?

    - by Jacob Johan Edwards
    Ubuntu One's Security FAQ indicates that Canonical encrypts connections and restricts access to user data. This is all well and fine, and I do trust SSL for online banking and other things more valuable than my private keys. That said, I am quite anxious about putting my ~/.ssh/id_dsa up in the cloud. Obviously, no system is totally secure. Could some knowledgeable party, then, pragmatically quantify the risks?
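
    One way to sidestep most of the risk, rather than quantify it, is to never let the plaintext key reach the synced folder at all: encrypt it locally and sync only the ciphertext. A small sketch with GnuPG in symmetric mode (the folder path is illustrative):

        # Encrypt the key with a passphrase before dropping it into the Ubuntu One folder
        gpg --symmetric --cipher-algo AES256 \
            --output ~/Ubuntu\ One/id_dsa.gpg ~/.ssh/id_dsa

        # On another machine, after the file has synced, recover it and lock down permissions
        gpg --decrypt --output ~/.ssh/id_dsa ~/Ubuntu\ One/id_dsa.gpg
        chmod 600 ~/.ssh/id_dsa

    With this arrangement a compromise of the storage or transport only exposes data protected by the GPG passphrase (and the SSH key's own passphrase, if it has one), not the raw key.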

    Read the article

  • Why is Jamendo missing from the Banshee and Rhythmbox music stores, and how can I add it back?

    - by user49523
    Jamendo is an open music platform for artists and listeners. I know that Debian had the Jamendo store in Rhythmbox by default, and Ubuntu used to have it as well, so why is it no longer there on Ubuntu? Is it because of Ubuntu One? And how can I add Jamendo to music players like Rhythmbox and Banshee? The Magnatune store is in the Rhythmbox plugins but not in Banshee. Can we expect Jamendo to be included in the upcoming release, Ubuntu 12.04 (I am using the beta version)?

    Read the article

  • Cleanest/One-liner way to require all files in directory in Ruby?

    - by viatropos
    When creating gems, I often have a directory structure like this:

        |-- lib
            |-- helpers.rb
            `-- helpers
                |-- helper_a.rb
                `-- helper_b.rb

    Inside helpers.rb, I'm just require-ing the files in the helpers directory. But I have to do things like this:

        $:.push(File.dirname(__FILE__) + '/helpers')
        require 'helper_a'
        require 'helper_b'

    Is there a way to make that one line so I never have to add to it? I just came up with this real quick:

        dir = File.join(File.dirname(__FILE__), "helpers")
        Dir.entries(dir)[2..-1].each { |file| require "#{dir}/#{file[0..-4]}" }

    But it's two lines and ugly. What's a slick one-liner to solve this problem?
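
    For what it's worth, a common one-line idiom for this is a glob over the sibling directory; a sketch, assuming every helper lives directly under lib/helpers and nothing depends on load order:

        # helpers.rb -- require every .rb file sitting in ./helpers next to this file
        Dir[File.join(File.dirname(__FILE__), "helpers", "*.rb")].sort.each { |file| require file }

    The glob avoids the "." and ".." bookkeeping of Dir.entries as well as the $LOAD_PATH mutation, and the sort just makes the load order deterministic.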

    Read the article

  • "Download failed, tap to resume"

    - by user96142
    I have shared folders on my laptop (a System76 Pangolin Performance) and have installed Ubuntu One on my Motorola Droid RAZR. The folders show up on my Droid; however, when I try to view them by tapping on them I get the following error: "Download failed, tap to resume". I have Googled for solutions and have uninstalled and reinstalled the app, but I still have the problem. Does anyone have any suggestions?

    Read the article

  • Graphic crash after Ubuntu 12.04 minor update

    - by user68753
    My problem came after doing a normal package update on 12.04 and then rebooting: the system was never able to boot normally again, showing the "The system is running in low-graphics mode" window. I then tried to fix the problem by reinstalling gdm. Now I'm reinstalling the whole system; booting from USB, everything worked as usual before that update. I just want to never have to reinstall Ubuntu this way again. Any thoughts? I'm using Ubuntu 12.04 LTS on an Acer Aspire One D255.
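
    In case it saves a full reinstall next time: the low-graphics screen usually means the display manager or the X driver stack broke during the update, and it can often be repaired from a text console (Ctrl+Alt+F1) instead. A hedged sketch of the usual first attempts on 12.04, where the default display manager is lightdm rather than gdm:

        # Finish any half-applied update, then reinstall the display manager and desktop metapackage
        sudo dpkg --configure -a
        sudo apt-get -f install
        sudo apt-get update
        sudo apt-get install --reinstall lightdm ubuntu-desktop
        sudo dpkg-reconfigure lightdm
        sudo service lightdm restart

    None of this is guaranteed to match whatever the broken update actually touched (a proprietary graphics driver is another frequent culprit), but it is a cheaper first step than reinstalling from USB.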

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle's enthusiasm in evangelizing its latest innovations may leave some to wonder if we've lost sight of the fact that not all database applications are created equal. After all, many databases don't have the business requirements for high availability and data protection that require all of Oracle's 'stuff'. For many real-world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort.

    Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost. Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain.

    Bronze is appropriate for databases where simple restart or restore from backup is 'HA enough'. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart.

    Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart.

    Gold raises the game substantially for business critical applications that can't accept vulnerability to single points of failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times.

    Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement. But they deliver substantial value for your most critical applications where downtime and data loss are not an option.

    The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection - the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another - whether it's your favorite non-Oracle vendor or an Oracle Engineered System.

    Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at:
    - Oracle Maximum Availability Architecture - Webcast
    - Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • Nhibernate Migration from 1.0.2.0 to 2.1.2 and many-to-one save problems

    - by Meska
    Hi, we have an old, big ASP.NET application with NHibernate, which we are extending and upgrading in parts. The NHibernate that was used was pretty old (1.0.2.0), so we decided to upgrade to 2.1.2 for the new features. HBM files are generated through a custom template with MyGeneration. Everything went quite smoothly, except for one thing. Let's say we have two objects, Blog and Post. A Blog can have many Posts, so Post has a many-to-one relationship to Blog. Due to the way this application operates, the relationship is made not through primary keys but through the Blog.Reference column. Sample mappings and .cs files:

        <?xml version="1.0" encoding="utf-8" ?>
            <id name="Id" column="Id" type="Guid">
                <generator class="assigned"/>
            </id>
            <property column="Reference" type="Int32" name="Reference" not-null="true" />
            <property column="Name" type="String" name="Name" length="250" />
        </class>

        <?xml version="1.0" encoding="utf-8" ?>
            <id name="Id" column="Id" type="Guid">
                <generator class="assigned"/>
            </id>
            <property column="Reference" type="Int32" name="Reference" not-null="true" />
            <property column="Name" type="String" name="Name" length="250" />
            <many-to-one name="Blog" column="BlogId" class="SampleNamespace.BlogEntity,SampleNamespace" property-ref="Reference" />
        </class>

    And the class files:

        class BlogEntity
        {
            public Guid Id { get; set; }
            public int Reference { get; set; }
            public string Name { get; set; }
        }

        class PostEntity
        {
            public Guid Id { get; set; }
            public int Reference { get; set; }
            public string Name { get; set; }
            public BlogEntity Blog { get; set; }
        }

    Now let's say that I have a Blog with Id 1D270C7B-090D-47E2-8CC5-A3D145838D9C and Reference 1. In the old NHibernate, such a thing was possible:

        //this Blog already exists in the database
        BlogEntity blog = new BlogEntity();
        blog.Id = Guid.Empty;
        blog.Reference = 1; //Reference is unique, so we can distinguish the Blog by this field
        blog.Name = "My blog";

        //this is the new Post that we are trying to insert
        PostEntity post = new PostEntity();
        post.Id = Guid.NewGuid();
        post.Name = "New post";
        post.Reference = 1234;
        post.Blog = blog;

        session.Save(post);

    However, in the new version I get an exception: cannot insert NULL into Post.BlogId. As I understand it, for the old NHibernate it was enough to have the Blog.Reference field - it could retrieve the entity by that field, attach it to the PostEntity, and when saving the PostEntity everything would work correctly. And as I understand it, the new NHibernate tries to retrieve only by Blog.Id. How do I solve this? I cannot change the DB design, nor can I assign an Id to the BlogEntity, as the objects are out of my control (they come pre-filled as generic "objects" like this from an external source).
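
    Not a mapping-level fix, but one workaround sketch that keeps the property-ref mapping exactly as above: load the persistent BlogEntity by its unique Reference value first, so NHibernate has a real row to take BlogId from, instead of handing it a transient Blog with only Reference populated. Whether this is acceptable depends on being allowed to touch the saving code; the criteria call itself is standard NHibernate 2.x API, and the values mirror the question.

        using NHibernate.Criterion;

        // Resolve the existing Blog by its unique Reference value
        BlogEntity blog = session.CreateCriteria(typeof(BlogEntity))
                                 .Add(Restrictions.Eq("Reference", 1))
                                 .UniqueResult<BlogEntity>();

        // Attach the persistent instance to the new Post before saving
        PostEntity post = new PostEntity();
        post.Id = Guid.NewGuid();
        post.Name = "New post";
        post.Reference = 1234;
        post.Blog = blog;
        session.Save(post);

    The extra query per save is the obvious cost of going this way.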

    Read the article

  • How do I left join tables in unidirectional many-to-one in Hibernate?

    - by jbarz
    I'm piggy-backing off of http://stackoverflow.com/questions/2368195/how-to-join-tables-in-unidirectional-many-to-one-condition. Suppose you have two classes:

        class A {
            @Id public Long id;
        }

        class B {
            @Id public Long id;

            @ManyToOne
            @JoinColumn(name = "parent_id", referencedColumnName = "id")
            public A parent;
        }

    B to A is a many-to-one relationship. I understand that I could add a collection of Bs to A, however I do not want that association. So my actual question is: is there an HQL or Criteria way of creating the SQL query

        select * from A left join B on (b.parent_id = a.id)

    This will retrieve all A records with a Cartesian product of each B record that references A, and will include A records that have no B referencing them. If you use

        from A a, B b where b.a = a

    then it is an inner join and you do not receive the A records that do not have a B referencing them. I have not found a good way of doing this without two queries, so anything less than that would be great. Thanks.
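
    Since the association only exists from B to A, one HQL option is to start from B and use a right outer join back to the parent; that produces the same rows as the SQL left join above, with A records that no B references coming back paired with a null B. A hedged sketch (entity and field names as in the question; the mapped field is called parent, so the join path uses it):

        // HQL: a right join from the owning side is equivalent to "A left join B"
        List<Object[]> rows = session.createQuery(
                "select a, b from B b right join b.parent a")
            .list();

        for (Object[] row : rows) {
            A a = (A) row[0];
            B b = (B) row[1];   // null for A records with no B pointing at them
            // ...
        }

    The Criteria API has no clean equivalent for joining an unmapped inverse side, so if right joins feel too obscure the usual fallbacks are a native SQL query or the two-query approach already mentioned.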

    Read the article

  • Why is the beforeInsert event called when I delete a parent in a one-to-many relationship in Grails?

    - by nico
    Hello, I have a one-to-many relationship, and when I try to delete a parent that has more than one child, the beforeInsert event gets called on the first child. I have some code in this event that I mean to run only before inserting a child, not when I'm deleting the parent! Any ideas on what might be wrong? The entities:

        class MenuItem {
            static constraints = {
                name(blank:false, maxSize:200)
                category()
                subCategory(nullable:true, validator:{ val, obj ->
                    if(val == null){
                        return true
                    }else{
                        return obj.category.subCategories.contains(val) ? true : ['invalid.category.no.subcategory']
                    }
                })
                price(nullable:true)
                servedAtSantaMonica()
                servedAtWestHollywood()
                highLight()
                servedAllDay()
                dateCreated(display:false)
                lastUpdated(display:false)
            }
            static mapping = { extras lazy:false }
            static belongsTo = [category:MenuCategory, subCategory:MenuSubCategory]
            static hasMany = [extras:MenuItemExtra]
            static searchable = { extras component: true }

            String name
            BigDecimal price
            Boolean highLight = false
            Boolean servedAtSantaMonica = false
            Boolean servedAtWestHollywood = false
            Boolean servedAllDay = false
            Date dateCreated
            Date lastUpdated
            int displayPosition

            void moveUpDisplayPos(){
                def oldDisplayPos = MenuItem.get(id).displayPosition
                if(oldDisplayPos == 0){
                    return
                }else{
                    def previousItem = MenuItem.findByCategoryAndDisplayPosition(category, oldDisplayPos - 1)
                    previousItem.displayPosition += 1
                    this.displayPosition = oldDisplayPos - 1
                    this.save(flush:true)
                    previousItem.save(flush:true)
                }
            }

            void moveDownDisplayPos(){
                def oldDisplayPos = MenuItem.get(id).displayPosition
                if(oldDisplayPos == MenuItem.countByCategory(category) - 1){
                    return
                }else{
                    def nextItem = MenuItem.findByCategoryAndDisplayPosition(category, oldDisplayPos + 1)
                    nextItem.displayPosition -= 1
                    this.displayPosition = oldDisplayPos + 1
                    this.save(flush:true)
                    nextItem.save(flush:true)
                }
            }

            String toString(){ name }

            def beforeInsert = {
                displayPosition = MenuItem.countByCategory(category)
            }

            def afterDelete = {
                def otherItems = MenuItem.findAllByCategoryAndDisplayPositionGreaterThan(category, displayPosition)
                otherItems.each{
                    it.displayPosition -= 1
                    it.save()
                }
            }
        }

        class MenuItemExtra {
            static constraints = {
                extraOption(blank:false, maxSize:200)
                extraOptionPrice(nullable:true)
            }
            static searchable = true
            static belongsTo = [menuItem:MenuItem]

            BigDecimal extraOptionPrice
            String extraOption
            int displayPosition

            void moveUpDisplayPos(){
                def oldDisplayPos = MenuItemExtra.get(id).displayPosition
                if(oldDisplayPos == 0){
                    return
                }else{
                    def previousExtra = MenuItemExtra.findByMenuItemAndDisplayPosition(menuItem, oldDisplayPos - 1)
                    previousExtra.displayPosition += 1
                    this.displayPosition = oldDisplayPos - 1
                    this.save(flush:true)
                    previousExtra.save(flush:true)
                }
            }

            void moveDownDisplayPos(){
                def oldDisplayPos = MenuItemExtra.get(id).displayPosition
                if(oldDisplayPos == MenuItemExtra.countByMenuItem(menuItem) - 1){
                    return
                }else{
                    def nextExtra = MenuItemExtra.findByMenuItemAndDisplayPosition(menuItem, oldDisplayPos + 1)
                    nextExtra.displayPosition -= 1
                    this.displayPosition = oldDisplayPos + 1
                    this.save(flush:true)
                    nextExtra.save(flush:true)
                }
            }

            String toString(){ extraOption }

            def beforeInsert = {
                if(menuItem){
                    displayPosition = MenuItemExtra.countByMenuItem(menuItem)
                }
            }

            def afterDelete = {
                def otherExtras = MenuItemExtra.findAllByMenuItemAndDisplayPositionGreaterThan(menuItem, displayPosition)
                otherExtras.each{
                    it.displayPosition -= 1
                    it.save()
                }
            }
        }

    Read the article

  • What software development process should I learn first for a solo project?

    - by Omar Kohl
    I want to develop a project on my own (if it is successful, more people might start working on it too). Also, I want to apply some proper software engineering from the first day until the last. On one hand just to try it out and compare results with previous projects that were just about writing code quick and dirty, and on the other hand to learn! I know the proper answer to this question is "It depends very much on the project...", "There is no single correct answer...". But I just need someplace to start, somewhere where every step is written down and tells me what to do. If I'm not happy, next time I'll try something else. So, how/where should I start? I would love to hear some book suggestions, cause I'm all about books :-D.

    Read the article

  • Ubuntuone fails to sync with 'File Sync starting...' displayed

    - by a different ben
    I am on 12.04 using ubuntuone-client 3.0.1-0ubuntu1.0.1. I actually have two machines that I sync with, both having the same Ubuntu version and ubuntuone-client version. One is fine, the other is not. File sync has frozen within a user-defined folder under my home folder. The graphical client reports 'File Sync starting...' in the top-right corner, but this never changes. I have two files with changes that show a syncing overlay in Nautilus. They are both very small text files. Here are some details:

        harb@joan:~$ u1sdtool --status
        State: READY
        connection: With User Not Network
        description: ready to connect
        is_connected: False
        is_error: False
        is_online: False
        queues: WORKING

        harb@joan:~$ u1sdtool --current-transfers
        Current uploads: 0
        Current downloads: 0

    The status seems to suggest that I am not connected to a network; however, I am connected - in fact I am accessing this machine via NX. Is it not working because I am connected via NX? Happy to provide other info, just not sure what would be useful.
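
    For comparison while debugging this kind of hang, these are the usual next u1sdtool steps after --status; a hedged sketch, with option names as in the 3.x client (worth double-checking against u1sdtool --help):

        # See what the daemon thinks is still queued for sync
        u1sdtool --waiting

        # The status above says "ready to connect" but is_connected: False,
        # so explicitly ask the daemon to connect
        u1sdtool --connect

        # If it stays wedged, stop the daemon and query it again;
        # talking to it over D-Bus starts it back up on demand
        u1sdtool --quit
        u1sdtool --status

    Whether NX is to blame is hard to say from here, but a daemon reporting READY with is_connected: False has generally just not been told to connect, which --connect addresses directly.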

    Read the article

  • High I/O wait after login

    - by Jackson Tan
    I've noticed that ubuntuone-syncdaemon hogs the hard disk every time I log in to Ubuntu (10.04). This goes on for two or three minutes, which makes Ubuntu insufferably slow. Opening Firefox is okay, but the browser is constantly greyed out and lags horribly. Given that I often shut down my laptop when I don't use it (about 3 to 4 times a day), this makes Ubuntu lose much of its lustre because of the long boot time. Is this normal behaviour for Ubuntu One? Is it intended? Note that I have actually posted this in the forums here, but I received only a few replies.
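
    Not a fix for the underlying rescan, but one stopgap that helps in similar reports is dropping the sync daemon into the idle I/O scheduling class right after login, so its disk activity no longer starves Firefox and the rest of the desktop. A small sketch (assumes a single syncdaemon process):

        # Put the running Ubuntu One sync daemon into the "idle" I/O class (class 3),
        # so its reads only proceed when nothing else wants the disk
        ionice -c3 -p $(pgrep -f ubuntuone-syncdaemon)

        # Verify the change
        ionice -p $(pgrep -f ubuntuone-syncdaemon)

    Login will still take a while for the daemon to finish its scan, but the desktop should stay responsive in the meantime.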

    Read the article

  • My ubuntuone is broken, what is the problem?

    - by user6962
    I am on 10.10 and u1sdtool seems to be completely broken. I reinstalled it; no change. I can add my PC to the account, but it is added twice(!) every time I try. So I have my netbook and two copies of my PC in the account. The netbook with 10.04 has no problems. Below is the error message I get when attempting to start Ubuntu One on the command line:

        desktop:~$ u1sdtool --status
        Oops, an error ocurred:
        Traceback (most recent call last):
        Failure: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        desktop:~$

    Starting it from the Me Menu has the same effect: the HDD gets really busy for a minute and then nothing happens, and the client will not start. There is nothing in the syslog or anywhere else.

    Read the article

  • U1 Music Streaming: Is it possible to have album cover art for OGG files?

    - by Will Daniels
    Just started trying out the U1 music streaming service and so far very pleased. The one issue I have that could be a deal-breaker when it comes time to pay up is that half of my collection is OGG Vorbis and I cannot find a way to show album art for OGG files. I already tried adding a cover.jpg to the folder and embedding the image via easyTAG (works for MP3 but not OGG). Does anybody have a solution besides transcoding them all to MP3? Will this likely be supported in future?

    Read the article

  • Mobile music playback on android phone keeps buffering

    - by ianio
    I have copied some music to an Ubuntu One synced folder on my Karmic machine, and I have the Ubuntu One Android app installed on an Android 2.2 phone. When listening to music I get frequent breaks while the music is buffering, even with full signal. Every other music streaming service, such as Last.fm and BBC iPlayer, is fine. What is going wrong? MP3s should not have trouble buffering over a good 3G signal. I have set the cache size to 200MB but I can't find a buffer size setting. Is it my MP3s? They are usually on the order of 320kb/s. This is a deal-breaker for me if there is no solution, which is a massive shame as I like the principle of the software. Cheers.

    Read the article
