Search Results



  • Constant crashes in windows 7 64bit when playing games

    - by yx
    I've tried everything I can possibly think of to fix this problem and I'm totally out of ideas, so any help would be appreciated. The problem: whenever I fire up a game, it works for a short while with no problems and then it crashes. Either it's a hard crash, forcing me to reboot, or Windows reports that the display driver has stopped working and recovered. Here is a list of things I've already tried:
    Drivers - tried the latest drivers (Catalyst 9.12) as well as the stock drivers that came with the video card. Also have the latest BIOS/chipset.
    Memtest - Ran Memtest86+ overnight with no problems; the Windows diagnostic tool also does not find any problems.
    Overheating - Video card/CPU temperatures are well below peak (42 and 31 degrees Celsius respectively).
    PSU voltage - CPUID shows that the voltage levels are all above what they should be. The PSU itself is only roughly 16 months old and is a good model.
    HDD - No errors when checked.
    GPU - Brand new (replaced the previous card since I thought it was the problem; apparently not).
    Overclocking - Everything is at stock levels; memory voltage is set to the manufacturer's standard.
    Specs:
    Motherboard: ASUS P5Q Pro
    CPU: Core 2 Duo E8400 3.0 GHz
    OS: Windows 7 Home Premium 64-bit
    Memory: Mushkin Enhanced 4GB DDR2
    GPU: Sapphire HD 5850 1GB
    PSU: SeaSonic M12 600W ATX12V
    DirectX: DX11
    Event Viewer after a crash always has these logged:
    A fatal hardware error has occurred. Reported by component: Processor Core. Error Source: Machine Check Exception. Error Type: Bus/Interconnect Error. Processor ID: 1. The details view of this entry contains further information.
    A fatal hardware error has occurred. Reported by component: Processor Core. Error Source: Machine Check Exception. Error Type: Bus/Interconnect Error. Processor ID: 0. The details view of this entry contains further information.
    A previous card that I had (4850x2) also had these errors, so I changed video cards, but the same thing is happening.

    Read the article

  • Can't install mysql 5.1 on a windows machine because the last install left artifacts.

    - by Zombies
    After uninstalling MySQL 5.1 (64-bit version) I cannot install the win32 version! Apparently the devs felt it necessary to leave helpful artifacts behind? I have rebooted my machine, but it had no effect. Running this:
    C:\Users\User1>net start mysql
    The MySQL service is starting.
    The MySQL service could not be started.
    A system error has occurred.
    System error 1067 has occurred.
    The process terminated unexpectedly.
    And ran this:
    C:\Program Files (x86)\MySQL\MySQL Server 5.1\bin>mysqld --console
    100213 10:52:58 [Note] Plugin 'FEDERATED' is disabled.
    InnoDB: Error: log file .\ib_logfile0 is of different size 0 10485760 bytes
    InnoDB: than specified in the .cnf file 0 25165824 bytes!
    100213 10:52:59 [ERROR] Plugin 'InnoDB' init function returned error.
    100213 10:52:59 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
    100213 10:52:59 [ERROR] Unknown/unsupported table type: INNODB
    100213 10:52:59 [ERROR] Aborting
    100213 10:52:59 [Note] mysqld: Shutdown complete
    Update: For some reason it looks like it is installing the 32-bit DB into the old 64-bit directory... will look into this... (the bin directory is going into the 32-bit Program Files directory).
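
    Editor's note: the InnoDB messages say the new install found redo log files left behind by the old one, sized for a different innodb_log_file_size. A common remedy, sketched below under the assumption that the old data is no longer needed (or has been backed up) and that the data directory is in its default location, is to stop the service, move the stale ib_logfile* files out of the way, and let InnoDB recreate them on the next start. The path shown is illustrative and may differ on your machine; alternatively, set innodb_log_file_size in my.ini to match the old files (10485760 bytes = 10M).
    net stop mysql
    cd "C:\ProgramData\MySQL\MySQL Server 5.1\data"
    rem Move the stale redo logs aside so InnoDB recreates them at the size in my.ini
    move ib_logfile0 ib_logfile0.bak
    move ib_logfile1 ib_logfile1.bak
    net start mysql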

    Read the article

  • Updating and deleting java (red hat / centos) (new)

    - by JochemTheSchoolKid
    I am a total noob with Linux, so please explain clearly if you have a solution for me. I have a VPS and I want to update Java. I found a guide on the Java site which says: rpm -e <package_name>
    I searched for the packages:
    [root@srv1 ~]# rpm -qa | grep java
    java_cup-0.10k-5.el6.x86_64
    java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
    Then I tried the delete command:
    [root@srv1 ~]# rpm -e java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
    error: Failed dependencies:
    java-gcj-compat is needed by (installed) java_cup-1:0.10k-5.el6.x86_64
    java-gcj-compat >= 1.0.70 is needed by (installed) sinjdoc-0.5-9.1.el6.x86_64
    What should I do now? (Removing has worked, thanks to the answer below.)
    Problem two! Now I am installing this package from Java:
    [root@srv1 java]# rpm -ivh jre-7u9-linux-i586.rpm
    Preparing... ########################################### [100%]
    1:jre ########################################### [100%]
    Unpacking JAR files...
    rt.jar... Error: Could not open input file: /usr/java/jre1.7.0_09/lib/rt.pack
    jsse.jar... Error: Could not open input file: /usr/java/jre1.7.0_09/lib/jsse.pack
    charsets.jar... Error: Could not open input file: /usr/java/jre1.7.0_09/lib/charsets.pack
    localedata.jar... Error: Could not open input file: /usr/java/jre1.7.0_09/lib/ext/localedata.pack
    plugin.jar... Error: Could not open input file: /usr/java/jre1.7.0_09/lib/plugin.pack
    javaws.jar... Error: Could not open input file: /usr/java/jre1.7.0_09/lib/javaws.pack
    deploy.jar... Error: Could not open input file: /usr/java/jre1.7.0_09/lib/deploy.pack
    Can someone help me again with this?
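
    Editor's note: jre-7u9-linux-i586.rpm is the 32-bit JRE, while the rpm -qa output above shows an x86_64 system, so the bundled 32-bit unpack tool likely cannot run and the .pack files are never converted into .jar files. A hedged sketch of the usual way out, assuming you actually want the 64-bit JRE (package names are illustrative):
    # Remove the half-installed 32-bit JRE
    rpm -e jre
    # Install the matching 64-bit package instead (download jre-7u9-linux-x64.rpm from Oracle first)
    rpm -ivh jre-7u9-linux-x64.rpm
    # Or, if the 32-bit JRE is really required on this 64-bit host,
    # install the 32-bit C library first so its tools can run, then reinstall:
    yum install glibc.i686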

    Read the article

  • How do I cancel windows server 2003 repair install?

    - by Kilgore2k
    System: Windows Server 2003 Enterprise.
    Scenario: the NTDS database is corrupt and all attempts to fix it with esentutl fail. Ran chkdsk, which seemed to repair the disk error and give access to the ntds.dit file, but esentutl still fails. (I attached the drive to a different server to run esentutl.)
    Error: Access to source database '[path to copy of]/ntds.dit' failed with Jet error -1022. Operation terminated with error -1022 (JET_errDiskIO, Disk IO error) after 0.170 seconds.
    This error occurs on any disk I copy the files to, including the original location in C:\WINDOWS\NTDS\
    Now enter the "Stupid!" and "what was I thinking!?" part (must be the late hour...)
    Stupid: no updated backup - after using a backup I get a network password error in the lsass error.
    What was I thinking!?: I started the repair install from the original CD, but the install fails since AD fails to start. Now I can't boot into any mode (safe mode, AD restore, etc.) nor complete the repair install.
    I would really like to avoid a fresh install since I have the Exchange server on this DC and would rather migrate to a new server than have to start from scratch. Thanks!

    Read the article

  • Why won't sql server express 2008 service restart after I enable TCP/IP Protocol?

    - by John
    Whenever I enable TCP/IP connections on my SQL Server Express 2008 database server running on Windows XP SP3, I cannot restart the service; it simply states "The request failed or the service did not respond in a timely fashion". Any suggestions of what I may have configured incorrectly?
    [update] Here is the applicable part of the error log:
    MSSQL$SQLEXPRESS: Server failed to listen on 'any' 3060. Error: 0x2747. To proceed, notify your system administrator.
    MSSQL$SQLEXPRESS: TDSSNIClient initialization failed with error 0x2747, status code 0xa. Reason: Unable to initialize the TCP/IP listener. An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
    MSSQL$SQLEXPRESS: TDSSNIClient initialization failed with error 0x2747, status code 0x1. Reason: Initialization failed with an infrastructure error. Check for previous errors. An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
    MSSQL$SQLEXPRESS: Could not start the network library because of an internal error in the network library. To determine the cause, review the errors immediately preceding this one in the error log.
    MSSQL$SQLEXPRESS: SQL Server could not spawn FRunCM thread. Check the SQL Server error log and the Windows event logs for information about possible related problems.
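
    Editor's note: error 0x2747 is Winsock error 10055 (WSAENOBUFS), so the TCP listener is failing at the socket layer rather than inside SQL Server. Assuming a Winsock or port problem on the XP machine, a few hedged diagnostic steps from a command prompt (the port number 3060 is taken from the log above and may not be what your instance is configured for):
    rem See whether something else is already bound to the port SQL Server is trying to listen on
    netstat -ano | findstr :3060
    rem Count sockets stuck in TIME_WAIT, a sign of socket/buffer exhaustion
    netstat -an | find /c "TIME_WAIT"
    rem Reset the Winsock catalog in case firewall/VPN software has corrupted it, then reboot
    netsh winsock reset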

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up,  debugged and updated is a major chore that has to be extensively documented and or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted.In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the the ability to publish directly to IIS from within Visual Studio with ease.Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead.The Missing Link for ASP.NET as a Service: Auto-LoadingI've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today.The tricky part for running an app as a service inside of IIS then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.Getting started with Application InitializationAs of IIS 8 Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via Web Platform Installer. Using IIS 8 Application Initialization is an optional install component in Windows or the Windows Server Role Manager: This is an optional component so make sure you explicitly select it.IIS Configuration for Application InitializationInitialization needs to be applied on the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.Start with the Application Pool:Here you need to set both the Start Automatically which is always set, and the StartMode which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is set true by default and controls the starting of the application pool itself while Always Running flag is required in order to launch the application. Without the latter flag set the site settings have no effect.Now on the Site/Application level you can specify whether the site should pre load: Set the Preload Enabled flag to true.At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the App is starting up. This can be useful if startup is really slow, so rather than displaying blank screen while the user is fiddling their thumbs you can display a static HTML page instead: <system.webServer> <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true"> <add initializationPage="ping.ashx" /> </applicationInitialization> </system.webServer>This allows you to specify a page to execute in a dry run. IIS basically fakes request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page in this case ping.ashx which just returns a simple OK string - ie. a fast pipeline request. This request is run immediately after Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site you can optionally show some sort of static status page that says, "we'll be right back".  I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.Note that the web.config stuff is optional. If you don't provide it IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline. 
Ideally though you want to make sure that an ASP.NET endpoint is hit either with your default page, or by specify the initializationPage to ensure ASP.NET actually gets hit since it's possible for IIS fire unmanaged requests only for static pages (depending how your pipeline is configured).What about AppDomain Restarts?In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons:Files are updated in the BIN folderWeb Deploy to your siteweb.config is changedHard application crashThese operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:protected void Application_End() { var client = new WebClient(); var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx"; client.DownloadString(url); Trace.WriteLine("Application Shut Down Ping: " + url); }which fires any ASP.NET url to the current site at the very end of the pipeline shutdown which in turn ensures that the site immediately starts back up.Manual Configuration in ApplicationHost.configThe above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them.When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form for me, the module registration did not occur and I had to manually add it.<globalModules> <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" /> </globalModules>Most likely you won't need ever need to add this, but if things are not working it's worth to check if the module is actually registered.Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config.<system.applicationHost> <applicationPools> <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated"> <processModel identityType="LocalSystem" setProfileEnvironment="true" /> </add> </applicationPools> <sites> <site name="Default Web Site" id="1"> <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true"> <virtualDirectory path="/" physicalPath="C:\Clients\…" /> </application> </site> </sites> </system.applicationHost>On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true.And that's all you should need. You can still set the web.config settings described above as well.ASP.NET as a Service?In the particular application I'm working on currently, we have a queue manager that runs as standalone service that polls a database queue and picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool. 
In order for this service to work all it needs is a long running reference that keeps it alive for the life time of the application.In this particular app there are two components that run in the background on their own threads: A scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope and the QueueManager. Here's what this looks like in global.asax:public class Global : System.Web.HttpApplication { private static ApplicationScheduler scheduler; private static ServiceLauncher launcher; protected void Application_Start(object sender, EventArgs e) { // Pings the service and ensures it stays alive scheduler = new ApplicationScheduler() { CheckFrequency = 600000 }; scheduler.Start(); launcher = new ServiceLauncher(); launcher.Start(); // register so shutdown is controlled HostingEnvironment.RegisterObject(launcher); }}By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart,OnStop,OnResume etc.). Otherwise the behavior and operation is very similar.In this application ASP.NET serves two purposes: It acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.Registering a Background Object with ASP.NETNotice also the use of the HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:public void Stop(bool immediate = false) { LogManager.Current.LogInfo("QueueManager Controller Stopped."); Controller.StopProcessing(); Controller.Dispose(); Thread.Sleep(1500); // give background threads some time HostingEnvironment.UnregisterObject(this); }Implementing IRegisterObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.RegisterObject() is not required but I would highly recommend implementing it on whatever object controls your background processing to all clean shutdowns when the AppDomain shuts down.Testing it outI'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup to Web start up - everything else just worked as is. An honorable mention goes to SignalR 2.0's oWin hosting, because with the new oWin based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive needed, to point at the SignalR startup class. Sweet!It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. 
Startup feels faster because of the preload.Starting and Stopping the 'Service'Because the application is running as a Web Server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager the SignalR service and front monitoring app has a play and stop button for toggling the queue.If you want more administrative control and have it work more like a Windows Service you can also stop the application pool explicitly from the command line which would be equivalent to stopping and restarting a service.To start and stop from the command line you can use the IIS appCmd tool. To stop:> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"and to start> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"Note that when you explicitly force the AppPool to stop running either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.What's not to like?There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or if scalability requires be offloaded to its own Web server.Easier to work withBut the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes you end up on a background thread for debugging but Visual Studio handles that just fine and if you stay on a single thread this is no different than debugging any other code.SummaryUsing ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service like features and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service it's easy to do and you can leverage the same tool chain you're already using for other Web projects. 
If you have lots of service projects it's worth considering… give it some thought…
© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself.  This post is designed to assist beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here]. ADF Business Components - triggers, default table values and instead of views. Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schema provided by with the Oracle database such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in. The crime Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x] ...where X is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit, one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users?  It must be the phantom ADF developer! [insert dramatic music here] The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page: A false conclusion What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record, why do we see JBO-25014? : Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0 if memory serves me correct) and recommended value is optimistic, and the other valid value is pessimistic. Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting –Djbo.debugoutput=console.  At runtime this options turns on instrumentation within the ADF BC layer, which among a lot of extra detail displayed in the log window, will show the actual SQL statement issued to the database, including the LOCK statement we're looking to confirm. 
Setting our locking mode to pessimistic, opening the Business Components Browser of a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record see among others the following log entries: [421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[423] Where binding param 1: 1206  As can be seen on line 422, in fact a LOCK-FOR-UPDATE is indeed issued to the database.  Later when we commit the record we see: [441] OracleSQLBuilder: SAVEPOINT 'BO_SP'[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[445] Update binding param 1: N[446] Where binding param 2: 1206[447] BookingsView1 notify COMMIT ... [448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [449] EntityCache close prepared statement ....and as a result the changes are saved to the database, and the lock is released. Let's see what happens when we use the optimistic locking mode, this time to change the same BOOKINGS record CHARGEABLE column again.  As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK has been taken out on the row. However when we save our records by issuing a commit, the following is recorded in the logs: [509] OracleSQLBuilder: SAVEPOINT 'BO_SP'[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT[513] Where binding param 1: 1205[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2[517] Update binding param 1: Y[518] Where binding param 2: 1205[519] BookingsView1 notify COMMIT ... [520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ... [521] EntityCache close prepared statement Again even though we're seeing the midtier delay the LOCK statement until commit time, it is in fact occurring on line 412, and released as part of the commit issued on line 419.  Therefore with either optimistic or pessimistic locking a lock is indeed issued. Our conclusion at this point must be, unless there's the unlikely cause the LOCK statement is never really hitting the database, or the even less likely cause the database has a bug, then ADF does in fact take out a lock on the record before allowing the current user to update it.  So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.  Who is the phantom? At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't understand yet what's causing it. 
This leads onto two further questions, how does ADF know another user has changed the row, and what's been changed anyway? To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11 How to Protect Against Losing Simultaneous Updated Data , that details the Entity Object Change-Indicator property, gives us the answer: At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.  The guide further suggests to make this solution more efficient: You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values. We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather it keeps a copy of the original record fetched, separate to the user changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If values don't match, be it the default compare-all-columns behaviour, or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error. This leaves one last question.  Now we know the mechanism under which ADF identifies a changed row, what we don't know is what's changed and who changed it? The real culprit What's changed?  We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by who? There are three potential causes: Database triggers The database trigger among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular in an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. Why this causes the issue specific to our ADF use, is when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date as the database triggers have modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time to the database, ADF compares its original copy of the record to that in the database, and it detects the record has been changed – giving us JBO-25014. This is probably the most common cause of this problem. Default values A second reason this issue can occur is another database feature, default column values.  
When creating a database table the schema designer can define default values for specific columns.  For example a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N.  Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL.  The database in this case will overwrite the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario. Instead of trigger views I must admit I haven't double checked this scenario but it seems plausible, that of the Oracle database's instead of trigger view (sometimes referred to as instead of views).  A view in the database is based on a query, and dependent on the queries complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updateable and deletable views, Oracle introduced the instead of view, that gives the view designer the ability to not only define the view query, but a set of programmatic PLSQL triggers where the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record in the instead of view, the record and it's data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to practically do anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view underlying query for fetching the data. Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of 1 developer on an isolated database.  The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if 2 users modify the same record. At this point under project timelines pressure, the obvious fix for developers is to drop both database triggers and default values from the underlying tables.  However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems.  Dropping the database triggers or default value that the existing Oracle Forms  applications assumes and requires to be in place could cause unexpected behaviour and bugs in the Forms application.  Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and timely exercise, not ideal. Solving the mystery once and for all Luckily ADF has built in functionality to deal with this issue, though it's not a surprise, as Oracle as the author of ADF also built the database, and are fully aware of the Oracle database's feature set.  At the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties.  
Simply selecting these instructs ADF BC after inserting or updating a record to the database, to expect the database to modify the said attributes, and read a copy of the changed attributes back into its cached mid-tier record.  Thus next time the developer modifies the current record, the comparison between the mid-tier record and the database record match, and JBO-25014: Another user has changed" is no longer an issue. [Post edit - as per the comment from Oracle's Steven Davelaar below, as he correctly points out the above solution will not work for instead-of-triggers views as it relies on SQL RETURNING clause which is incompatible with this type of view] Alternatively you can set the Change Indicator on one of the attributes.  This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated.  In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back on the original issue will return.
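
Editor's note: to make the most common cause above concrete, here is a minimal sketch of the kind of database trigger that silently modifies a row after ADF has written it, which in turn produces JBO-25014 on the next update from the same ADF session. The table is the BOOKINGS table used in the post, but the audit columns and trigger name are made up for illustration:
CREATE OR REPLACE TRIGGER bookings_audit_trg
BEFORE INSERT OR UPDATE ON bookings
FOR EACH ROW
BEGIN
  -- The database, not ADF, stamps these columns, so ADF's cached copy of the row
  -- no longer matches what is actually stored in the table.
  :NEW.modified_by   := USER;
  :NEW.modified_date := SYSDATE;
END;
/
Marking the corresponding entity attributes with Refresh After Insert/Update, as described above, lets ADF read these database-assigned values back into its cache after each DML operation.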

    Read the article

  • WD MBWE II (White Strip Light) 2TB - unable to access data

    - by user210477
    I have a WD MBWE II (White Strip Light) 2TB - (WD20000H2NC-00) Was working fine until a few days ago. I guess there was a power failure and after that I am unable to access the 'Public' or the 'Download' folder anymore. I have been searching for answers everywhere but came up empty handed. Web GUI still works, SSH works. I hooked up both the drives on my PC and UFS Explorer sees the drive. But so far I am unable to retrieve any of my data. I do not remember what RAID setting I used when I first got the drive. I can see from GUI that it is set as "Stripe". The drive contains 10 years of family pictures which I really do not want to loose. Sadly and stupidly, I didn't even keep a backup of this drive. Can somebody please help or point me in the right direction. Thank you in advance for your help. Disk Utility on Ubuntu reports 1405 bad sectors on one drive. How can I retrieve my data? Please help. Logs below: ~ # mdadm --detail /dev/md[012345678] /dev/md0: Version : 0.90 Creation Time : Wed Jul 15 08:36:17 2009 Raid Level : raid1 Array Size : 1959872 (1914.26 MiB 2006.91 MB) Used Dev Size : 1959872 (1914.26 MiB 2006.91 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:53:29 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 04f7a661:98983b3b:26b29e4f:9b646adb Events : 0.266 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 /dev/md1: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 256896 (250.92 MiB 263.06 MB) Used Dev Size : 256896 (250.92 MiB 263.06 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Oct 30 22:08:21 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : aaa7b859:c475312d:efc5a766:6526b867 Events : 0.10 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 18 1 active sync /dev/sdb2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 /dev/md3: Version : 0.90 Creation Time : Wed Jul 15 08:36:18 2009 Raid Level : raid1 Array Size : 987904 (964.91 MiB 1011.61 MB) Used Dev Size : 987904 (964.91 MiB 1011.61 MB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 3 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:26:33 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 3f4099f2:72e6171b:5ba962fd:48464a62 Events : 0.54 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 mdadm: md device /dev/md4 does not appear to be active. mdadm: md device /dev/md5 does not appear to be active. mdadm: md device /dev/md6 does not appear to be active. mdadm: md device /dev/md7 does not appear to be active. mdadm: md device /dev/md8 does not appear to be active. 
~ # cat /etc/mtab securityfs /sys/kernel/security securityfs rw 0 0 /dev/md2 /DataVolume xfs rw,usrquota 0 0 /dev/md4 /ExtendVolume xfs rw,usrquota 0 0 ~ # df -k Filesystem 1k-blocks Used Available Use% Mounted on /dev/md0 1929044 145092 1685960 8% / /dev/md3 972344 123452 799500 13% /var /dev/ram0 63412 20 63392 0% /mnt/ram ~ # mdadm -D /dev/md2 /dev/md2: Version : 0.90 Creation Time : Sat Sep 25 10:01:26 2010 Raid Level : raid0 Array Size : 1947045760 (1856.85 GiB 1993.77 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Fri Nov 1 13:30:53 2013 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K UUID : 01dae60a:6831077b:77f74530:8680c183 Events : 0.97 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 8 20 1 active sync /dev/sdb4 ~ # mdadm -D /dev/md4 mdadm: md device /dev/md4 does not appear to be active. ~ # mount /dev/root on / type ext3 (rw,noatime,data=ordered) proc on /proc type proc (rw) sys on /sys type sysfs (rw) /dev/pts on /dev/pts type devpts (rw) securityfs on /sys/kernel/security type securityfs (rw) /dev/md3 on /var type ext3 (rw,noatime,data=ordered) /dev/ram0 on /mnt/ram type tmpfs (rw) ~ # cat /var/log/messages Oct 29 18:04:50 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:04:59 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 29 18:17:45 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 29 18:17:53 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 00:50:11 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 00:50:19 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 16:29:47 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 16:30:00 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 18:27:22 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 18:27:30 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:06:03 shmotashNAS daemon.warn wixEvent[3462]: Network Link - NIC 1 link is down. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 19:06:10 shmotashNAS daemon.info wixEvent[3462]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 19:14:58 shmotashNAS daemon.warn wixEvent[3462]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. 
Oct 30 19:20:05 shmotashNAS daemon.alert wixEvent[3462]: Thermal Alarm - System temperature exceeded threshold.(66 degrees) Oct 30 19:58:29 shmotashNAS daemon.alert wixEvent[3462]: HDD SMART - HDD 1 SMART Health Status: Failed. Oct 30 22:05:39 shmotashNAS daemon.info init: Starting pid 13043, console /dev/null: '/usr/bin/killall' Oct 30 22:05:39 shmotashNAS syslog.info System log daemon exiting. Oct 30 22:08:09 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1 Oct 30 22:08:09 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:19 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:25 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:37 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:44 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:08:46 [Version 01.09.00.96] ++++++++++++++ Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: mc_db_init ... Oct 30 22:08:46 shmotashNAS syslog.info miocrawler: ****** database does not exist. ret = -1, creating path Oct 30 22:08:49 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done. Oct 30 22:08:50 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: === inotify init done. Oct 30 22:08:51 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ... Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done. Oct 30 22:08:52 shmotashNAS syslog.info miocrawler: === Walking directory done. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:08:57 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4605, console /dev/null: '/bin/touch' Oct 30 22:09:10 shmotashNAS daemon.info init: Starting pid 4607, console /dev/ttyS0: '/sbin/getty' Oct 30 22:09:10 shmotashNAS daemon.info wixEvent[3557]: System Startup - System startup. Oct 30 22:09:16 shmotashNAS daemon.warn wixEvent[3557]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed. Oct 30 22:14:14 shmotashNAS daemon.warn wixEvent[3557]: Network Link - NIC 1 link is down. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network Link - NIC 1 link is up 100 Mbps full duplex. Oct 30 22:14:21 shmotashNAS daemon.info wixEvent[3557]: Network IP Address - NIC 1 use static IP address 192.168.1.102 Oct 30 22:29:36 shmotashNAS daemon.warn wixEvent[3557]: System Reboot - System will reboot. Oct 30 22:29:40 shmotashNAS daemon.info init: Starting pid 5974, console /dev/null: '/usr/bin/killall' Oct 30 22:29:40 shmotashNAS syslog.info System log daemon exiting. 
Oct 30 22:47:56 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
Oct 30 22:47:56 shmotashNAS daemon.warn wixEvent[3461]: Network Link - NIC 1 link is down.
Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network Link - NIC 1 link is up 100 Mbps full duplex.
Oct 30 22:48:02 shmotashNAS daemon.info wixEvent[3461]: Network IP Address - NIC 1 use static IP address 192.168.1.102
Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 22:48:09 [Version 01.09.00.96] ++++++++++++++
Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: mc_db_init ...
Oct 30 22:48:09 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0
Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done.
Oct 30 22:48:10 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool
Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done.
Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === inotify init done.
Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ...
Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done.
Oct 30 22:48:11 shmotashNAS syslog.info miocrawler: === Walking directory done.
Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4079, console /dev/null: '/bin/touch'
Oct 30 22:48:27 shmotashNAS daemon.info init: Starting pid 4080, console /dev/ttyS0: '/sbin/getty'
Oct 30 22:48:28 shmotashNAS daemon.info wixEvent[3461]: System Startup - System startup.
Oct 30 22:49:01 shmotashNAS daemon.warn wixEvent[3461]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
Oct 30 23:51:11 shmotashNAS daemon.warn wixEvent[3461]: System Reboot - System will reboot.
Oct 30 23:51:16 shmotashNAS daemon.info init: Starting pid 6498, console /dev/null: '/usr/bin/killall'
Oct 30 23:51:16 shmotashNAS syslog.info System log daemon exiting.
Oct 30 23:54:19 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network Link - NIC 1 link is up 100 Mbps full duplex.
Oct 30 23:55:37 shmotashNAS daemon.info wixEvent[3476]: Network IP Address - NIC 1 use static IP address 192.168.1.102
Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:30 - 23:55:44 [Version 01.09.00.96] ++++++++++++++
Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: mc_db_init ...
Oct 30 23:55:44 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0
Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done.
Oct 30 23:55:45 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool
Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done.
Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === inotify init done.
Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ...
Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done.
Oct 30 23:55:46 shmotashNAS syslog.info miocrawler: === Walking directory done.
Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4115, console /dev/null: '/bin/touch'
Oct 30 23:55:58 shmotashNAS daemon.info init: Starting pid 4116, console /dev/ttyS0: '/sbin/getty'
Oct 30 23:55:58 shmotashNAS daemon.info wixEvent[3476]: System Startup - System startup.
Oct 30 23:56:33 shmotashNAS daemon.warn wixEvent[3476]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
Oct 31 00:29:14 shmotashNAS auth.info sshd[5409]: Server listening on 0.0.0.0 port 22.
Oct 31 00:31:25 shmotashNAS auth.info sshd[5486]: Accepted password for root from 192.168.1.100 port 50785 ssh2
Oct 31 00:33:44 shmotashNAS auth.info sshd[5565]: Accepted password for root from 192.168.1.100 port 50817 ssh2
Oct 31 00:36:39 shmotashNAS daemon.info init: Starting pid 5680, console /dev/null: '/usr/bin/killall'
Oct 31 00:36:39 shmotashNAS syslog.info System log daemon exiting.
Oct 31 00:40:44 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network Link - NIC 1 link is up 100 Mbps full duplex.
Oct 31 00:40:51 shmotashNAS daemon.info wixEvent[3464]: Network IP Address - NIC 1 use static IP address 192.168.1.102
Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:10:31 - 00:41:00 [Version 01.09.00.96] ++++++++++++++
Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: mc_db_init ...
Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0
Oct 31 00:41:00 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done.
Oct 31 00:41:01 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool
Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done.
Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === inotify init done.
Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ...
Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done.
Oct 31 00:41:02 shmotashNAS syslog.info miocrawler: === Walking directory done.
Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4101, console /dev/null: '/bin/touch'
Oct 31 00:41:14 shmotashNAS daemon.info init: Starting pid 4102, console /dev/ttyS0: '/sbin/getty'
Oct 31 00:41:15 shmotashNAS daemon.info wixEvent[3464]: System Startup - System startup.
Oct 31 00:41:47 shmotashNAS daemon.warn wixEvent[3464]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
Oct 31 01:13:19 shmotashNAS daemon.info init: Starting pid 5385, console /dev/null: '/usr/bin/killall'
Oct 31 01:13:19 shmotashNAS syslog.info System log daemon exiting.
Nov 1 13:26:25 shmotashNAS syslog.info syslogd started: BusyBox v1.1.1
Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network Link - NIC 1 link is up 100 Mbps full duplex.
Nov 1 13:26:32 shmotashNAS daemon.info wixEvent[3471]: Network IP Address - NIC 1 use static IP address 192.168.1.102
Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2013:11:01 - 13:26:38 [Version 01.09.00.96] ++++++++++++++
Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: mc_db_init ...
Nov 1 13:26:38 shmotashNAS syslog.info miocrawler: ++++++++ database exists: ret = 0
Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: === mc_db_init ...Done.
Nov 1 13:26:39 shmotashNAS syslog.info miocrawler: mcUtilsInit() Creating free queue pool
Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mcUtilsInit() Done.
Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === inotify init done.
Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: mc_trans_updater_init() ...
Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === mc_trans_updater_init() ...Done.
Nov 1 13:26:40 shmotashNAS syslog.info miocrawler: === Walking directory done.
Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4078, console /dev/null: '/bin/touch'
Nov 1 13:26:52 shmotashNAS daemon.info init: Starting pid 4079, console /dev/ttyS0: '/sbin/getty'
Nov 1 13:26:52 shmotashNAS daemon.info wixEvent[3471]: System Startup - System startup.
Nov 1 13:27:28 shmotashNAS daemon.warn wixEvent[3471]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
Nov 1 13:44:48 shmotashNAS auth.info sshd[5375]: Accepted password for root from 192.168.1.103 port 50217 ssh2
Nov 1 13:51:08 shmotashNAS auth.info sshd[5894]: Accepted password for root from 192.168.1.103 port 50380 ssh2

    Read the article

  • iPhone locationManager:didFailWithError problem when GPS disabled

    - by BenG
    So, I've followed other related threads, but for some reason I'm still having this error and I'm about ready to tear my hair out. I have implemented locationManager:didFailWithError to check and see if a user selects 'Don't Allow' to use the current location. -(void)locationManager:(CLLocationManager *)manager didFailWithError:(NSError *)error { NSLog(@"IN ERROR"); if ([error code] == kCLErrorDenied){ [manager stopUpdatingLocation]; } } However, the following error always appears when the user selects 'Don't Allow'...it's strange, especially the order in which the text 'IN ERROR' appears. ERROR,Time,293420691.000,Function,"void CLClientHandleDaemonDataRegistration(__CLClient*, const CLDaemonCommToClientRegistration*, const __CFDictionary*)",server did not accept client registration 1 2010-04-19 21:44:51.000 testApp[1414:207] IN ERROR So, it's outputting this error even before it has a chance to get into the didFailWithError function. Does anyone have any ideas about what might be happening? The rest of the locationManager code is as follows: self.locationManager = [[[CLLocationManager alloc] init] autorelease]; locationManager.delegate = self; locationManager.desiredAccuracy = kCLLocationAccuracyKilometer; locationManager.distanceFilter = 2; [locationManager startUpdatingLocation];
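
    A likely explanation (not confirmed in the question) is that the "server did not accept client registration" line is printed by Core Location itself when the user denies access; it is emitted before the delegate callback runs and can be ignored. The delegate only has to handle the denial, roughly as in this untested sketch; the explicit domain check is an addition, not part of the original code:

        -(void)locationManager:(CLLocationManager *)manager didFailWithError:(NSError *)error {
            // The console line is logged by the framework before this point; it is harmless.
            if ([[error domain] isEqualToString:kCLErrorDomain] && [error code] == kCLErrorDenied) {
                // User tapped "Don't Allow": stop updates and detach the delegate.
                [manager stopUpdatingLocation];
                manager.delegate = nil;
            }
        }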

    Read the article

  • how to rethrow same exception in sql server

    - by Shantanu Gupta
    I want to rethrow the same exception in SQL Server that occurred in my TRY block. I am able to throw the same message, but I want to throw the same error. BEGIN TRANSACTION BEGIN TRY INSERT INTO Tags.tblDomain (DomainName, SubDomainId, DomainCode, Description) VALUES(@DomainName, @SubDomainId, @DomainCode, @Description) COMMIT TRANSACTION END TRY BEGIN CATCH declare @severity int; declare @state int; select @severity=error_severity(), @state=error_state(); RAISERROR(@@Error,@ErrorSeverity,@state); ROLLBACK TRANSACTION END CATCH The RAISERROR(@@Error, @ErrorSeverity, @state) line shows an error, but I want functionality something like this: it raises the error with error number 50000, whereas I want the error number I am passing in @@Error to be thrown, so I can capture that error number at the front end, i.e. catch (SqlException ex) { if ex.number==2627 MessageBox.show("Duplicate value cannot be inserted"); } I want this functionality, which can't be achieved using RAISERROR. I don't want to give a custom error message at the back end.
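
    For reference, RAISERROR cannot re-raise a system error number such as 2627 (anything below 50000 is reserved), so a common workaround is to capture ERROR_NUMBER() in the CATCH block and embed it in the message text for the client to parse; on SQL Server 2012 and later, THROW with no arguments rethrows the original error untouched. A rough, untested sketch of the CATCH block:

        BEGIN CATCH
            DECLARE @num int, @msg nvarchar(2048), @sev int, @sta int;
            SELECT @num = ERROR_NUMBER(), @msg = ERROR_MESSAGE(),
                   @sev = ERROR_SEVERITY(), @sta = ERROR_STATE();

            ROLLBACK TRANSACTION;

            -- Embed the original number in the text, since it cannot be reused directly
            RAISERROR('Original error %d: %s', @sev, @sta, @num, @msg);

            -- SQL Server 2012+: simply rethrow with the original number (e.g. 2627)
            -- THROW;
        END CATCH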

    Read the article

  • ASP Classic page quit working

    - by justSteve
    I've had a set of legacy pages running on my IIS7 server for at least a year. Sometime last week something changed, and now this line fails: Response.Write CStr(myRS(0).name) & "=" & Cstr(myRS(0).value) It used to return nothing more exciting than the string 'Updated=true' (the sproc processes input params, stores them to a table, checks for errors, and when that's all done returns a success code by executing this statement: select 'true' as [Updated]). Now my page-side error handler is being invoked and reports: myError=Error from /logQuizScore.asp Error source: Microsoft VBScript runtime error Error number: 13 Error description: Type mismatch Important to note that a lot of pages use the same framework - same db, same coding format, connection strings - and (so far as I can tell) all the others are working. Troubleshooting so far: The call to the stored procedure is working correctly (stuff is stored to the given table). The output from the stored procedure is working correctly (I can execute a direct call with the given parameters and it works; I can see Profiler calling and passing). I can replace all the code with 'select 'true' as updated' and the error is the same. Everything up to the Response.Write statement above is correct. So something changed in how ADO renders that particular recordset. So I try: Response.Write myRS.Item.count and get: Error number: 424 Error description: Object required The recordset object seems not to be instantiating, but the command object _did execute. Repeat: lots of other pages use just the same basic logic to hit other sprocs without a problem. Full code snippet: set cmd1 = Server.CreateObject("ADODB.Command") cmd1.ActiveConnection = MM_cnCompliance4_STRING cmd1.CommandText = "dbo._usp_UserAnswers_INSERT" ... cmd1.CommandType = 4 cmd1.CommandTimeout = 0 cmd1.Prepared = true set myRS = cmd1.Execute Response.Write CStr(myRS(0).name) & "=" & Cstr(myRS(0).value)
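
    One frequent cause of this symptom (only a guess here) is that the stored procedure now emits an extra "rows affected" resultset or informational message ahead of the final SELECT, so the first recordset returned by cmd1.Execute is closed. Adding SET NOCOUNT ON at the top of dbo._usp_UserAnswers_INSERT usually restores the old behaviour; alternatively the page can walk forward to the open resultset, along these lines (untested sketch):

        set myRS = cmd1.Execute

        ' Skip any closed resultsets emitted before the final SELECT (1 = adStateOpen)
        do while not (myRS is nothing)
            if myRS.State = 1 then exit do
            set myRS = myRS.NextRecordset
        loop

        if not (myRS is nothing) then
            Response.Write CStr(myRS(0).name) & "=" & CStr(myRS(0).value)
        end if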

    Read the article

  • C++ Beginner - Trouble using classes inside of classes

    - by Francisco P.
    Hello, I am working on a college project, where I have to implement a simple Scrabble game. I have a player class (containing a Score and the player's hand, in the form of a std::string), and a score class (containing a name and a numeric (int) score). One of Player's member-functions is Score getScore(), which returns a Score object for that player. However, I get the following errors at compile time: player.h(27) : error C2146: syntax error : missing ';' before identifier 'getScore' player.h(27) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int player.h(27) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int player.h(27) : warning C4183: 'getScore': missing return type; assumed to be a member function returning 'int' player.h(35) : error C2146: syntax error : missing ';' before identifier '_score' player.h(35) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int player.h(35) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Here are lines 27 and 35, respectively: Score getScore(); //defined as public (...) Score _score; //defined as private I get that the compiler is having trouble recognizing Score as a valid type... But why? I have correctly included Score.h at the beginning of player.h: #include "Score.h" #include "Deck.h" #include <string> I have a default constructor for Score defined in Score.h: Score(); //score.h //score.cpp Score::Score() { _name = ""; _points = 0; } Any input would be appreciated! Thanks for your time, Francisco EDIT: As requested, score.h and player.h: http://pastebin.com/3JzXP36i http://pastebin.com/y7sGVZ4A
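
    Two usual suspects for this exact pattern of errors (neither confirmed by the pastebins): a missing semicolon after the class Score { ... } definition in Score.h, or a circular include in which Score.h (or Deck.h) includes player.h back, so Player is parsed before Score exists. A rough sketch of a layout that avoids both, with assumed member names:

        // Score.h
        #ifndef SCORE_H
        #define SCORE_H
        #include <string>

        class Score {
        public:
            Score();
        private:
            std::string _name;
            int _points;
        };   // <-- dropping this semicolon produces exactly C2146/C4430 in the next header

        #endif

        // player.h -- includes Score.h, but Score.h must not include player.h back
        #ifndef PLAYER_H
        #define PLAYER_H
        #include <string>
        #include "Score.h"

        class Player {
        public:
            Score getScore();
        private:
            Score _score;
            std::string _hand;
        };

        #endif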

    Read the article

  • How can I update the album art path using contentResolver?

    - by Ungureanu Liviu
    Hi! I want to update/insert a new image for an album in MediaStore but i can't get it work.. This is my code: public void updateAlbumImage(String path, int albumID) { ContentValues values = new ContentValues(); values.put(MediaStore.Audio.Albums.ALBUM_ART, path); int n = contentResolver.update(MediaStore.Audio.Albums.EXTERNAL_CONTENT_URI, values, MediaStore.Audio.Albums.ALBUM_ID + "=" + albumID, null); Log.e(TAG, "updateAlbumImage(" + path + ", " + albumID + "): " + n); } The error is: 03-24 03:09:46.323: ERROR/AndroidRuntime(5319): java.lang.UnsupportedOperationException: Unknown or unsupported URL: content://media/external/audio/albums 03-24 03:09:46.323: ERROR/AndroidRuntime(5319): at android.database.DatabaseUtils.readExceptionFromParcel(DatabaseUtils.java:131) 03-24 03:09:46.323: ERROR/AndroidRuntime(5319): at android.database.DatabaseUtils.readExceptionFromParcel(DatabaseUtils.java:111) 03-24 03:09:46.323: ERROR/AndroidRuntime(5319): at android.content.ContentProviderProxy.update(ContentProviderNative.java:405) 03-24 03:09:46.323: ERROR/AndroidRuntime(5319): at android.content.ContentResolver.update(ContentResolver.java:554) 03-24 03:09:46.323: ERROR/AndroidRuntime(5319): at com.liviu.app.smpp.managers.AudioManager.updateAlbumImage(AudioManager.java:563) 03-24 03:09:46.323: ERROR/AndroidRuntime(5319): at com.liviu.app.smpp.ShowAlbumsActivity.saveImageFile(ShowAlbumsActivity.java:375) 03-24 03:09:46.323: ERROR/AndroidRuntime(5319): at com.liviu.app.smpp.ShowAlbumsActivity.onClick(ShowAlbumsActivity.java:350) Thank you!
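
    The albums URI is effectively read-only for this purpose; the approach usually suggested (undocumented, and it may vary across Android versions, so treat this as a hypothetical sketch) is to write the artwork through the dedicated albumart content URI instead:

        // Needs android.net.Uri, android.content.ContentUris and android.content.ContentValues
        public void updateAlbumImage(String path, int albumId) {
            Uri artworkUri = Uri.parse("content://media/external/audio/albumart");

            // Drop any existing art for this album, then register the new file path
            contentResolver.delete(ContentUris.withAppendedId(artworkUri, albumId), null, null);

            ContentValues values = new ContentValues();
            values.put("album_id", albumId);
            values.put("_data", path);
            contentResolver.insert(artworkUri, values);
        }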

    Read the article

  • Github file size limit changed 6/18/13. Can't push now

    - by slindsey3000
    How does this change as of June 18, 2013 affect my existing repository with a file that exceeds that limit? I last pushed 2 months ago with a large file. I have a large file that I have removed locally but I can not push anything now. I get a "remote error" ... remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB I added the file to .gitignore after original push... But it still exists on remote (origin) Removing it locally should get rid of it at origin(Github) right? ... but ... it is not letting me push because there is a file on Github that exceeds the limit... https://github.com/blog/1533-new-file-size-limits These are the commands I issued plus error messages.. git add . git commit -m "delete cron_log.log" git push origin master remote: Error code: 40bef1f6653fd2410fb2ab40242bc879 remote: warning: Error GH413: Large files detected. remote: warning: See http://git.io/iEPt8g for more information. remote: error: File cron_log.log is 141.41 MB; this exceeds GitHub's file size limit of 100 MB remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB To https://github.com/slinds(omited_here)/linexxxx(omited_here).git ! [remote rejected] master - master (pre-receive hook declined) error: failed to push some refs to 'https://github.com/slinds(omited_here) I then tried things like git rm cron_log.log git rm --cached cron_log.log Same error.
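
    Deleting the file in a new commit is not enough: the push still contains the earlier commits whose blobs exceed 100 MB, so the large file has to be removed from the history being pushed, not just from the working tree. The standard recipe looks roughly like this (adapt paths and branch names; rewriting published history requires a force push):

        # Rewrite local history so cron_log.log disappears from every commit
        git filter-branch --force --index-filter \
          'git rm --cached --ignore-unmatch cron_log.log' \
          --prune-empty --tag-name-filter cat -- --all

        # Keep it out of future commits
        echo "cron_log.log" >> .gitignore
        git add .gitignore
        git commit -m "Ignore cron_log.log"

        # Push the rewritten history
        git push origin master --force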

    Read the article

  • RMagick installation failed in Ubuntu 9.04

    - by Marc Vitalis
    I already installed the following: imagemagick libmagickwand-dev but still i get this error. ====================================================================== Mon 05Oct09 19:36:06 This installation of RMagick 2.12.0 is configured for Ruby 1.8.7 (i486-linux) and ImageMagick 6.4.5 Q16 ====================================================================== make cc -I. -I. -I/usr/lib/ruby/1.8/i486-linux -I. -DRUBY_EXTCONF_H=\"extconf.h\" -I/usr/include/ImageMagick -fPIC -I/usr/include/ImageMagick -fopenmp -c rmmontage.c cc -I. -I. -I/usr/lib/ruby/1.8/i486-linux -I. -DRUBY_EXTCONF_H=\"extconf.h\" -I/usr/include/ImageMagick -fPIC -I/usr/include/ImageMagick -fopenmp -c rmutil.c cc -I. -I. -I/usr/lib/ruby/1.8/i486-linux -I. -DRUBY_EXTCONF_H=\"extconf.h\" -I/usr/include/ImageMagick -fPIC -I/usr/include/ImageMagick -fopenmp -c rmmain.c cc -I. -I. -I/usr/lib/ruby/1.8/i486-linux -I. -DRUBY_EXTCONF_H=\"extconf.h\" -I/usr/include/ImageMagick -fPIC -I/usr/include/ImageMagick -fopenmp -c rmimage.c rmimage.c: In function ‘Image_function_channel’: rmimage.c:5136: error: ‘MagickFunction’ undeclared (first use in this function) rmimage.c:5136: error: (Each undeclared identifier is reported only once rmimage.c:5136: error: for each function it appears in.) rmimage.c:5136: error: expected ‘;’ before ‘function’ rmimage.c:5152: error: ‘function’ undeclared (first use in this function) rmimage.c:5158: error: ‘PolynomialFunction’ undeclared (first use in this function) rmimage.c:5164: error: ‘SinusoidFunction’ undeclared (first use in this function) make: *** [rmimage.o] Error 1
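
    The undeclared MagickFunction/PolynomialFunction symbols suggest that RMagick 2.12.0 expects a newer ImageMagick than the 6.4.5 packaged with Ubuntu 9.04, so the usual way out is to pair matching versions; the version numbers below are illustrative, not verified:

        # Option 1: install an older RMagick that still builds against ImageMagick 6.4.5
        sudo gem install rmagick -v 2.9.1

        # Option 2: build a newer ImageMagick from source, then reinstall the current gem
        ./configure && make && sudo make install
        sudo gem install rmagick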

    Read the article

  • echo command not found while testing selenium with phpunit

    - by Bill Szerdy
    I am getting an ERROR: Unknown command: 'echo' executing a selenium script with phpunit. Based on the output that echo command should be included in my version of PHPUnit. The selenium script does execute successfully in the firefox selenium IDE. mkdir_build: phpunit: [exec] PHPUnit 3.4.12 by Sebastian Bergmann. [exec] [exec] . [exec] TestFull [exec] E [exec] [exec] Time: 11 seconds, Memory: 6.50Mb [exec] [exec] There was 1 error: [exec] [exec] 1) TestFull::testNumberOne [exec] PHPUnit_Framework_Exception: Response from Selenium RC server for testComplete(). [exec] ERROR: Unknown command: 'echo'. [exec] [exec] [exec] /directory/to/tests/TestFull.php:14 [exec] [exec] FAILURES! [exec] Tests: 1, Assertions: 0, Errors: 1. And the RC server output: $ java -jar selenium-server.jar -port 4445 -debug 13:23:08.426 INFO - Java: Sun Microsystems Inc. 14.2-b01 13:23:08.428 INFO - OS: Linux 2.6.28-15-server i386 13:23:08.439 INFO - v2.0 [a2], with Core v2.0 [a2] 13:23:08.439 INFO - Selenium server running in debug mode. 13:25:05.661 DEBUG - ---------retrieving CommandQueue for sel_93352 13:25:05.662 DEBUG - Browser 2c8b3b5657a640db9fb278ecbd01049e/:top sel_93352 posted ERROR: Unknown command: 'echo' 13:25:05.662 DEBUG - ---------retrieving CommandQueue for sel_93352 13:25:05.662 DEBUG - putting command: ERROR: Unknown command: 'echo' 13:25:05.662 DEBUG - ..command put?: true 13:25:05.662 DEBUG - sel_93352 commandHolder sel_93352 getCommand() called 13:25:05.662 DEBUG - waiting for data for at most 10 more s 13:25:05.662 DEBUG - data from polling: ERROR: Unknown command: 'echo' 13:25:05.662 DEBUG - sel_93352 commandResultHolder sel_93352 getResult() -> ERROR: Unknown command: 'echo' 13:25:05.663 DEBUG - Got result: ERROR: Unknown command: 'echo' on session 2c8b3b5657a640db9fb278ecbd01049e 13:25:05.663 INFO - Got result: ERROR: Unknown command: 'echo' on session 2c8b3b5657a640db9fb278ecbd01049e 13:25:05.663 DEBUG - Handled by org.openqa.selenium.server.SeleniumDriverResourceHandler in HttpContext[/selenium-server,/selenium-server] 13:25:05.663 DEBUG - RESPONSE: HTTP/1.1 200 OK Date: Wed, 07 Apr 2010 20:25:05 GMT Server: Jetty/5.1.x (Linux/2.6.28-15-server i386 java/1.6.0_16 Cache-Control: no-cache Pragma: no-cache Expires: Thu, 01 Jan 1970 00:00:00 GMT Content-Type: text/plain Connection: close

    Read the article

  • Compiling x264 with mp4 support problem

    - by johnas
    I'm trying to compile x264 with mp4 output support. I download the latest version from their git by typing git clone git://git.videolan.org/x264.git When I run ./configure it configures and I'm able to make it. But when i try to configure it with ./configure --enable-mp4-output and then try to make it, it returns a strange error, indicating that there is a compilation error. The error message looks like: ... Lots of similar errors ... output/mp4.c:297: warning: implicit declaration of function ‘gf_isom_add_sample’ output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘p_file’ output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘i_track’ output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘i_descidx’ output/mp4.c:297: error: ‘mp4_hnd_t’ has no member named ‘p_sample’ output/mp4.c:299: error: ‘mp4_hnd_t’ has no member named ‘p_sample’ output/mp4.c:300: error: ‘mp4_hnd_t’ has no member named ‘i_numframe’ I've tried different releases. I've installed gpac, ffmpeg, and tried numerous tips from the net. But still can't get it to work. The reason I want it with mp4 output is because I want to use ffmpeg to create mp4 files encoded with x264. I'm running Ubuntu Server 9.10 32 bits.
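
    The gf_isom_* and mp4_hnd_t errors usually mean the GPAC development headers that x264 found are missing or older than this snapshot expects, so --enable-mp4-output ends up compiling against a mismatched libgpac. A rough sequence (the package name is the Ubuntu one; a current gpac built from source may still be necessary):

        # Provide GPAC headers/libraries before configuring x264
        sudo apt-get install libgpac-dev

        cd x264
        ./configure --enable-mp4-output
        make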

    Read the article

  • rewrite not a member of LiftRules

    - by José Leal
    Hi guys, I was following http://www.assembla.com/wiki/show/liftweb/URL_Rewriting tutorial for url rewritting in liftweb.. but I get this error: error: value rewrite is not a member of object net.liftweb.http.LiftRules .. it is really odd.. and the documentation says that it exists. I'm using idea IDE, and I've done everything from scratch, using the lift maven blank archifact. Some more info: [INFO] ------------------------------------------------------------------------ [INFO] Building Joseph3 [INFO] task-segment: [tomcat:run] [INFO] ------------------------------------------------------------------------ [INFO] Preparing tomcat:run [INFO] [resources:resources {execution: default-resources}] [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 0 resource [INFO] [yuicompressor:compress {execution: default}] [INFO] nb warnings: 0, nb errors: 0 [INFO] artifact org.mortbay.jetty:jetty: checking for updates from scala-tools.org [INFO] artifact org.mortbay.jetty:jetty: checking for updates from central [INFO] [compiler:compile {execution: default-compile}] [INFO] Nothing to compile - all classes are up to date [INFO] [scala:compile {execution: default}] [INFO] Checking for multiple versions of scala [INFO] /home/dpz/Scala/Doit/Joseph3/src/main/scala:-1: info: compiling [INFO] Compiling 2 source files to /home/dpz/Scala/Doit/Joseph3/target/classes at 1274922123910 [ERROR] /home/dpz/Scala/Doit/Joseph3/src/main/scala/bootstrap/liftweb/Boot.scala:16: error: value rewrite is not a member of object net.liftweb.http.LiftRules [INFO] LiftRules.rewrite.prepend(NamedPF("ProductExampleRewrite") { [INFO] ^ [ERROR] one error found [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1(Exit value: 1) [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 19 seconds [INFO] Finished at: Thu May 27 03:02:07 CEST 2010 [INFO] Final Memory: 20M/175M [INFO] ------------------------------------------------------------------------ Process finished with exit code 1 enter code here
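
    The wiki page targets an older Lift; in Lift 2.x the single LiftRules.rewrite hook was split into statelessRewrite and statefulRewrite, which is consistent with the compiler not finding rewrite. A rough sketch of what the Boot.scala snippet becomes (the paths and parameter names are placeholders taken from the wiki example, not verified):

        // Boot.scala -- assumes the usual net.liftweb.http._ and net.liftweb.common._ imports
        LiftRules.statelessRewrite.prepend(NamedPF("ProductExampleRewrite") {
          case RewriteRequest(ParsePath("product" :: id :: Nil, _, _, _), _, _) =>
            RewriteResponse("viewProduct" :: Nil, Map("productId" -> id))
        })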

    Read the article

  • jQuery validate and tabs

    - by ntan
    Hi, I am using jQuery Validate for my form validation. The form is inside tabs. When I get an error I add an icon to the tab where the error exists, so it is visible to the user. So far so good. My problem is that after the error is corrected I cannot remove the error icon from the tab. I was assuming the validator is accessible via success, but it's not. Assuming that the first tab (tab0) has 3 fields for validation (field1, field2, field3), here is the full code: $("form#Form1") .validate({ invalidHandler: function(form, validator) { //TAB 0 if (validator.errorMap.field1 != "" && validator.errorMap.field2 != "" && validator.errorMap.field3 != "") { if ($("#tabs>ul>li").eq(0).find("img").length == 0) { $("#tabs>ul>li").eq(0).prepend("<img src=\"error.gif\">"); } } }, errorClass: "errorField", errorElement: "p", errorPlacement: function(error, element) { var parent = element.parent(); parent.addClass("error"); error.prependTo( parent ); }, //validator is not accessible via success //so my code is not working success: function(element,validator) { var parent = element.parent(); parent.removeClass("error"); $(parent).children("p.errorField").remove(); //TAB 0 if (validator.errorMap.field1 == "" && validator.errorMap.field2 == "" && validator.errorMap.field2 == "") { if ($("#tabs>ul>li").eq(0).find("img").length == 0) { $("#tabs>ul>li").eq(0).find("img").remove(); } } }, rules: { field1: { required: true }, field2: { required: true }, field3: { required: true } } }); Any suggestion is welcome.
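
    Since the validator instance is not passed into success, one workaround (a sketch only; it assumes the first tab panel is the first div inside #tabs) is to do the per-field cleanup in unhighlight and drop the tab icon once no field inside that panel still carries the error class:

        $("form#Form1").validate({
            // ... rules, errorPlacement and invalidHandler exactly as above ...
            unhighlight: function(element, errorClass) {
                var parent = $(element).parent();
                parent.removeClass("error");
                parent.children("p.errorField").remove();

                // Remove the icon on tab 0 once its panel has no flagged fields left
                if ($("#tabs div").eq(0).find(".error").length === 0) {
                    $("#tabs>ul>li").eq(0).find("img").remove();
                }
            }
        });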

    Read the article

  • Apps with lable XYZ could not be found

    - by HWM-Rocker
    Hello folks, today I ran into an error and have no clue how to fix it. Error: App with label XYZ could not be found. Are you sure your INSTALLED_APPS setting is correct? Here XYZ stands for the app name that I am trying to reset. This error shows up every time I try to reset it (manage.py reset XYZ). Showing all the SQL code works. Even manage.py validate shows no error. I already commented out every single line of code in models.py that I touched in the last three months (function by function, model by model), and even with no models left I get this error. Here http://code.djangoproject.com/ticket/10706 I found a bug report about this error. I also applied one of the patches to locate the error; it raises an exception so you get a traceback, but even there, there is no sign of which of my files the error occurred in. I don't want to paste my code right now, because it is nearly 1000 lines of code in the file I edited the most. If any of you have had the same error, please tell me where I can look for the problem. In that case I can post the important part of the source; otherwise it would be too much at once. Thank you for helping!!!
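
    The generic "App with label ... could not be found" message often hides an ImportError raised while Django loads INSTALLED_APPS, so forcing the app cache to load from a shell tends to surface the real traceback. A sketch using the Django 1.x-era API:

        # python manage.py shell
        from django.db import models

        # Imports every app in INSTALLED_APPS; a broken app raises its real ImportError
        models.get_apps()

        # Or load only the suspect app by its label
        models.get_app('XYZ')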

    Read the article

  • Using libgrib2c in c++ application, linker error "Undefined reference to..."

    - by Rich
    EDIT: If you're going to be doing things with GRIB files I would recommend the GDAL library which is backed by the Open Source Geospatial Foundation. You will save yourself a lot of headache :) I'm using Qt creator in Ubuntu creating a c++ app. I am attempting to use an external lib, libgrib2c.a, that has a header grib2.h. Everything compiles, but when it tries to link I get the error: undefined reference to 'seekgb(_IO_FILE*, long, long, long*, long*) I have tried wrapping the header file with: extern "C"{ #include "grib2.h" } But it didn't fix anything so I figured that was not my problem. In the .pro file I have the line: include($${ROOT}/Shared/common/commonLibs.pri) and in commonLibs.pri I have: INCLUDEPATH+=$${ROOT}/external_libs/g2clib/include LIBS+=-L$${ROOT}/external_libs/g2clib/lib LIBS+=-lgrib2c I am not encountering an error finding the library. If I do a nm command on the libgrib2c.a I get: nm libgrib2c.a | grep seekgb seekgb.o: 00000000 T seekgb And when I run qmake with the additional argument of LIBS+=-Wl,--verbose I can find the lib file in the output: attempt to open /usr/lib/libgrib2c.so failed attempt to open /usr/lib/libgrib2c.a failed attempt to open /mnt/sdb1/ESMF/App/ESMF_App/../external_libs/linux/qwt_6.0.2/lib/libgrib2c.so failed attempt to open /mnt/sdb1/ESMF/App/ESMF_App/../external_libs/linux/qwt_6.0.2/lib/libgrib2c.a failed attempt to open ..//Shared/Config/lib/libgrib2c.so failed attempt to open ..//Shared/Config/lib/libgrib2c.a failed attempt to open ..//external_libs/libssh2/lib/libgrib2c.so failed attempt to open ..//external_libs/libssh2/lib/libgrib2c.a failed attempt to open ..//external_libs/openssl/lib/libgrib2c.so failed attempt to open ..//external_libs/openssl/lib/libgrib2c.a failed attempt to open ..//external_libs/g2clib/lib/libgrib2c.so failed attempt to open ..//external_libs/g2clib/lib/libgrib2c.a succeeded Although it doesn't show any of the .o files in the library is this because it is a c library in my c++ app? in the .cpp file that I am trying to use the library I have: #include "gribreader.h" #include <stdio.h> #include <stdlib.h> #include <external_libs/g2clib/include/grib2.h> #include <Shared/logging/Logger.hpp> //------------------------------------------------------------------------------ /// Opens a GRIB file from disk. /// /// This function opens the grib file and searches through it for how many GRIB /// messages are contained as well as their starting locations. /// /// \param a_filePath. The path to the file to be opened. /// \return True if successful, false if not. 
//------------------------------------------------------------------------------ bool GRIBReader::OpenGRIB(std::string a_filePath) { LOG(notification)<<"Attempting to open grib file: "<< a_filePath; if(isOpen()) { CloseGRIB(); } m_filePath = a_filePath; m_filePtr = fopen(a_filePath.c_str(), "r"); if(m_filePtr == NULL) { LOG(error)<<"Unable to open file: " << a_filePath; return false; } LOG(notification)<<"Successfully opened GRIB file"; g2int currentMessageSize(1); g2int seekPosition(0); g2int lengthToBeginningOfGrib(0); g2int seekLength(32000); int i(0); int iterationLimit(300); m_GRIBMessageLocations.clear(); m_GRIBMessageSizes.clear(); while(i < iterationLimit) { seekgb(m_filePtr, seekPosition, seekLength, &lengthToBeginningOfGrib, &currentMessageSize); if(currentMessageSize != 0) { LOG(verbose) << "Adding GRIB message location " << lengthToBeginningOfGrib << " with length " << currentMessageSize; m_GRIBMessageLocations.push_back(lengthToBeginningOfGrib); m_GRIBMessageSizes.push_back(currentMessageSize); seekPosition = lengthToBeginningOfGrib + currentMessageSize; LOG(verbose) << "GRIB seek position moved to " << seekPosition; } else { LOG(notification)<<"End of GRIB file found, after "<< i << " GRIB messages."; break; } } if(i >= iterationLimit) { LOG(warning) << "The iteration limit of " << iterationLimit << "was reached while searching for GRIB messages"; } return true; } And the header grib2.h is as follows: #ifndef _grib2_H #define _grib2_H #include<stdio.h> #define G2_VERSION "g2clib-1.4.0" #ifdef __64BIT__ typedef int g2int; typedef unsigned int g2intu; #else typedef long g2int; typedef unsigned long g2intu; #endif typedef float g2float; struct gtemplate { g2int type; /* 3=Grid Defintion Template. */ /* 4=Product Defintion Template. */ /* 5=Data Representation Template. */ g2int num; /* template number. */ g2int maplen; /* number of entries in the static part */ /* of the template. */ g2int *map; /* num of octets of each entry in the */ /* static part of the template. */ g2int needext; /* indicates whether or not the template needs */ /* to be extended. */ g2int extlen; /* number of entries in the template extension. */ g2int *ext; /* num of octets of each entry in the extension */ /* part of the template. 
*/ }; typedef struct gtemplate gtemplate; struct gribfield { g2int version,discipline; g2int *idsect; g2int idsectlen; unsigned char *local; g2int locallen; g2int ifldnum; g2int griddef,ngrdpts; g2int numoct_opt,interp_opt,num_opt; g2int *list_opt; g2int igdtnum,igdtlen; g2int *igdtmpl; g2int ipdtnum,ipdtlen; g2int *ipdtmpl; g2int num_coord; g2float *coord_list; g2int ndpts,idrtnum,idrtlen; g2int *idrtmpl; g2int unpacked; g2int expanded; g2int ibmap; g2int *bmap; g2float *fld; }; typedef struct gribfield gribfield; /* Prototypes for unpacking API */ void seekgb(FILE *,g2int ,g2int ,g2int *,g2int *); g2int g2_info(unsigned char *,g2int *,g2int *,g2int *,g2int *); g2int g2_getfld(unsigned char *,g2int ,g2int ,g2int ,gribfield **); void g2_free(gribfield *); /* Prototypes for packing API */ g2int g2_create(unsigned char *,g2int *,g2int *); g2int g2_addlocal(unsigned char *,unsigned char *,g2int ); g2int g2_addgrid(unsigned char *,g2int *,g2int *,g2int *,g2int ); g2int g2_addfield(unsigned char *,g2int ,g2int *, g2float *,g2int ,g2int ,g2int *, g2float *,g2int ,g2int ,g2int *); g2int g2_gribend(unsigned char *); /* Prototypes for supporting routines */ extern double int_power(double, g2int ); extern void mkieee(g2float *,g2int *,g2int); void rdieee(g2int *,g2float *,g2int ); extern gtemplate *getpdstemplate(g2int); extern gtemplate *extpdstemplate(g2int,g2int *); extern gtemplate *getdrstemplate(g2int); extern gtemplate *extdrstemplate(g2int,g2int *); extern gtemplate *getgridtemplate(g2int); extern gtemplate *extgridtemplate(g2int,g2int *); extern void simpack(g2float *,g2int,g2int *,unsigned char *,g2int *); extern void compack(g2float *,g2int,g2int,g2int *,unsigned char *,g2int *); void misspack(g2float *,g2int ,g2int ,g2int *, unsigned char *, g2int *); void gbit(unsigned char *,g2int *,g2int ,g2int ); void sbit(unsigned char *,g2int *,g2int ,g2int ); void gbits(unsigned char *,g2int *,g2int ,g2int ,g2int ,g2int ); void sbits(unsigned char *,g2int *,g2int ,g2int ,g2int ,g2int ); int pack_gp(g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *, g2int *); #endif /* _grib2_H */ I have been scratching my head for two days on this. If anyone has an idea on what to do or can point me in some sort of direction, I'm stumped. Also, if you have any comments on how I can improve this post I'd love to hear them, kinda new at this posting thing. Usually I'm able to find an answer in the vast stores of knowledge already contained on the web.
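
    For what it's worth, the mangled signature in the linker error (seekgb(_IO_FILE*, long, long, long*, long*)) shows that the failing translation unit saw a C++-linkage declaration, so the extern "C" wrapper was not in effect for the include that gribreader.cpp actually uses; it has to cover that include (or sit inside grib2.h itself) in every path that pulls the header in. A minimal sketch:

        // gribreader.h (or directly inside grib2.h): give the declarations C linkage
        #ifdef __cplusplus
        extern "C" {
        #endif
        #include <external_libs/g2clib/include/grib2.h>
        #ifdef __cplusplus
        }
        #endif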

    Read the article

  • Faster quadrature decoder loops with Python code

    - by Kelei
    I'm working with a BeagleBone Black and using Adafruit's IO Python library. Wrote a simple quadrature decoding function and it works perfectly fine when the motor runs at about 1800 RPM. But when the motor runs at higher speeds, the code starts missing some of the interrupts and the encoder counts start to accumulate errors. Do you guys have any suggestions as to how I can make the code more efficient or if there are functions which can cycle the interrupts at a higher frequency. Thanks, Kel Here's the code: # Define encoder count function def encodercount(term): global counts global Encoder_A global Encoder_A_old global Encoder_B global Encoder_B_old global error Encoder_A = GPIO.input('P8_7') # stores the value of the encoders at time of interrupt Encoder_B = GPIO.input('P8_8') if Encoder_A == Encoder_A_old and Encoder_B == Encoder_B_old: # this will be an error error += 1 print 'Error count is %s' %error elif (Encoder_A == 1 and Encoder_B_old == 0) or (Encoder_A == 0 and Encoder_B_old == 1): # this will be clockwise rotation counts += 1 print 'Encoder count is %s' %counts print 'AB is %s %s' % (Encoder_A, Encoder_B) elif (Encoder_A == 1 and Encoder_B_old == 1) or (Encoder_A == 0 and Encoder_B_old == 0): # this will be counter-clockwise rotation counts -= 1 print 'Encoder count is %s' %counts print 'AB is %s %s' % (Encoder_A, Encoder_B) else: #this will be an error as well error += 1 print 'Error count is %s' %error Encoder_A_old = Encoder_A # store the current encoder values as old values to be used as comparison in the next loop Encoder_B_old = Encoder_B # Initialize the interrupts - these trigger on the both the rising and falling GPIO.add_event_detect('P8_7', GPIO.BOTH, callback = encodercount) # Encoder A GPIO.add_event_detect('P8_8', GPIO.BOTH, callback = encodercount) # Encoder B # This is the part of the code which runs normally in the background while True: time.sleep(1)
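
    Most of the per-interrupt time here goes to the print calls and the chained comparisons; a common speed-up (sketch only, and the sign convention may need flipping for your wiring) is a 16-entry transition table indexed by the previous and current channel bits, with no printing inside the handler:

        # Lookup-table quadrature decoding; names and GPIO pins follow the question's code
        TRANSITION = [ 0, -1,  1,  0,
                       1,  0,  0, -1,
                      -1,  0,  0,  1,
                       0,  1, -1,  0]

        state = 0      # previous (A, B) bits packed as A<<1 | B
        counts = 0

        def encodercount(term):
            global state, counts
            a = GPIO.input('P8_7')
            b = GPIO.input('P8_8')
            state = ((state << 2) | (a << 1) | b) & 0x0F
            counts += TRANSITION[state]   # log counts outside the handler, not here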

    Read the article

  • C++: Maybe you know this pitfall?

    - by Martijn Courteaux
    Hi, I'm developing a game. I have a header GameSystem (just functions like the game loop, no class) with two variables: int mouseX and int mouseY. These are updated in my game loop. Now I want to access them from the Game.cpp file (a class built from a header file and a source file). So, I #include "GameSystem.h" in Game.h. After doing this I get a lot of compile errors. When I remove the include, the compiler of course says: Game.cpp:33: error: ‘mouseX’ was not declared in this scope Game.cpp:34: error: ‘mouseY’ was not declared in this scope where I want to access mouseX and mouseY. All my .h files have header guards, generated by Eclipse. I'm using SDL, and if I remove the lines that access the variables, everything compiles and runs perfectly (*). I hope you can help me... This is the error log when I #include "GameSystem.h" (all the code it refers to works, as explained by the (*)): In file included from ../trunk/source/domein/Game.h:14, from ../trunk/source/domein/Game.cpp:8: ../trunk/source/domein/GameSystem.h:30: error: expected constructor, destructor, or type conversion before ‘*’ token ../trunk/source/domein/GameSystem.h:46: error: variable or field ‘InitGame’ declared void ../trunk/source/domein/GameSystem.h:46: error: ‘Game’ was not declared in this scope ../trunk/source/domein/GameSystem.h:46: error: ‘g’ was not declared in this scope ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘char’ ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘bool’ ../trunk/source/domein/FPS.h:46: warning: ‘void FPS_SleepMilliseconds(int)’ defined but not used This is the code which tries to access the two variables: SDL_Rect pointer; pointer.x = mouseX; pointer.y = mouseY; pointer.w = 3; pointer.h = 3; SDL_FillRect(buffer, &pointer, 0xFF0000);
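
    The errors reported inside GameSystem.h point at two separate issues: GameSystem.h uses the Game type before it has been declared (a circular include once Game.h includes GameSystem.h back), and variables defined in a header need extern declarations so that every includer sees the same objects. A rough sketch of the usual fix; the InitGame signature is only guessed from the error messages:

        // GameSystem.h
        #ifndef GAMESYSTEM_H
        #define GAMESYSTEM_H

        class Game;                 // forward declaration instead of including Game.h

        extern int mouseX;          // declared here for every includer...
        extern int mouseY;

        void InitGame(Game* g, const char* title, bool fullscreen);   // hypothetical signature

        #endif

        // GameSystem.cpp
        int mouseX = 0;             // ...defined exactly once here
        int mouseY = 0;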

    Read the article

< Previous Page | 479 480 481 482 483 484 485 486 487 488 489 490  | Next Page >