Search Results

Search found 4561 results on 183 pages for 'production'.


  • Tomcat Application Generating too many logs

    - by rohitgu
     Hi, I have an application that runs on Tomcat 6.0.20 on an Ubuntu Linux server. It generates a huge amount of logging in the catalina.out file; most of it is produced while the application is in use, but it is not produced by the application itself. Some of the entries it generates are given below:

         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement FINE: startElement(,,mime-type)
         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement FINE: Pushing body text ' '
         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement FINE: New match='web-app/mime-mapping/mime-type'
         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester startElement FINE: Fire begin() for CallParamRule[paramIndex=1, attributeName=null, from stack=false]
         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester characters FINE: characters(audio/x-mpeg)
         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: endElement(,,mime-type)
         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: match='web-app/mime-mapping/mime-type'
         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: bodyText='audio/x-mpeg'
         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: Fire body() for CallParamRule[paramIndex=1, attributeName=null, from stack=false]
         Apr 16, 2010 2:55:24 PM org.apache.tomcat.util.digester.Digester endElement FINE: Popping body text '

     How can I turn them off? This is very important, since this is a production application. Regards, Rohit
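     These FINE entries appear to come from java.util.logging (Tomcat 6's JULI sits on top of it); one common fix is to raise the threshold for the digester logger, e.g. an org.apache.tomcat.util.digester.Digester.level = INFO line in Tomcat's conf/logging.properties. Below is a minimal programmatic sketch of the same idea using only the standard java.util.logging API - the class name QuietDigester and the sample messages are illustrative, not taken from the question:

         import java.util.logging.Level;
         import java.util.logging.Logger;

         public class QuietDigester {
             public static void main(String[] args) {
                 // Raise the threshold of the noisy logger so FINE records are dropped.
                 Logger digester = Logger.getLogger("org.apache.tomcat.util.digester.Digester");
                 digester.setLevel(Level.INFO);

                 digester.fine("no longer written to the log");  // suppressed
                 digester.info("still logged");                  // kept
             }
         }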


  • Possible to recover mysql root pass with sudo server access?

    - by jonathonmorgan
     I've inherited development for a website on VPS hosting, and have login info for a user with sudo privileges, but don't have the password for the MySQL root user. After digging around a little, it looks like the only way to fix this is to stop MySQL (something like this: http://waoewaoe.wordpress.com/2010/02/03/recover-reset-mysql-root-password/). But because the website it's serving is currently in production, I'm hoping you guys can enlighten me about any potential consequences (or let me know if there's typically a file where the password would be accessible).

     a) During the time MySQL is stopped, information in the database won't be accessible, right -- even by other users?

     b) Will resetting the root password have any impact on other users after MySQL has restarted? Will their usernames/passwords still be valid?

     The current application is using an account with limited privileges to read/write to the database, and while 5 minutes of downtime in the middle of the night would probably go unnoticed, half a day while I tie up loose ends/figure out what I screwed up will land me in hot water. Thanks in advance for your help!


  • How to write to a varchar(max) column using ODBC

    - by andyjohnson
     Summary: I'm trying to write a text string to a column of type varchar(max) using ODBC and SQL Server 2005. It fails if the length of the string is greater than 8000. Help!

     I have some C++ code that uses ODBC (SQL Native Client) to write a text string to a table. If I change the column from, say, varchar(100) to varchar(max) and try to write a string with length greater than 8000, the write fails with the following error:

         [Microsoft][ODBC SQL Server Driver]String data, right truncation

     So, can anyone advise me on whether this can be done, and how? Some example (not production) code that shows what I'm trying to do:

         SQLHENV hEnv = NULL;
         SQLRETURN iError = SQLAllocEnv(&hEnv);

         HDBC hDbc = NULL;
         SQLAllocConnect(hEnv, &hDbc);
         const char* pszConnStr = "Driver={SQL Server};Server=127.0.0.1;Database=MyTestDB";
         UCHAR szConnectOut[SQL_MAX_MESSAGE_LENGTH];
         SWORD iConnectOutLen = 0;
         iError = SQLDriverConnect(hDbc, NULL, (unsigned char*)pszConnStr, SQL_NTS,
                                   szConnectOut, (SQL_MAX_MESSAGE_LENGTH-1),
                                   &iConnectOutLen, SQL_DRIVER_COMPLETE);

         HSTMT hStmt = NULL;
         iError = SQLAllocStmt(hDbc, &hStmt);

         const char* pszSQL = "INSERT INTO MyTestTable (LongStr) VALUES (?)";
         iError = SQLPrepare(hStmt, (SQLCHAR*)pszSQL, SQL_NTS);

         char* pszBigString = AllocBigString(8001);
         iError = SQLSetParam(hStmt, 1, SQL_C_CHAR, SQL_VARCHAR, 0, 0, (SQLPOINTER)pszBigString, NULL);

         iError = SQLExecute(hStmt); // Returns SQL_ERROR if pszBigString len > 8000

     The table MyTestTable contains a single column defined as varchar(max). The function AllocBigString (not shown) creates a string of arbitrary length. I understand that previous versions of SQL Server had an 8000-character limit for varchars, but not why this is happening in SQL 2005. Thanks, Andy


  • Deploying .NET COM dll, getting error (0x80070002)

    - by Brett
     I have a .NET COM assembly I am attempting to deploy to a web server (IIS 6, Windows Server 2003). We have successfully deployed this assembly to our test environment, but the production environment is not working. The assembly is being called from a classic ASP page. Every time that page tries to initialize the assembly with Set LTMRender = CreateObject("LTMRender.Render"), I get an error "Error Type:, (0x80070002)". This error seems to indicate a permission-denied or file-not-found type of problem.

     I created a test app to see if the assembly works outside of the web page. The .exe initializes the assembly, and then makes a call designed to fail, which in turn causes the assembly to produce a log file. It works if I run the .exe in the same folder as the assembly, but fails if I run it elsewhere. For some reason, the assembly is not accessible from outside its folder. I can't figure out why this won't work. Things I have confirmed:

     - The deployment folder has adequate permissions. We have confirmed that the folder the assembly is installed in has the correct permissions for all the necessary user accounts.
     - The assembly is signed with a strong name, and was registered with regasm.exe C:\_WebSites\LTMRender\LTMRender.dll /codebase /tlb:C:\_WebSites\LTMRender\LTMRender.tlb. Regasm reported success.
     - The assembly has the attribute and relevant GUIDs set correctly.

     Any tips?

     EDIT: We ran Filemon against my testapp.exe and it seems to have indicated what the problem is. When testapp.exe runs in the D:\_websites\DocWebV2\ or D:\_websites\DocWebV2\LTMRender\ folder, it succeeds, and Filemon shows:

         D:\_websites\DocWebV2\LTMRender\pinPDF.dll    SUCCESS

     If I run my testapp.exe in D:\_websites\DocWebV2\Client - where my ASP pages run - it shows:

         D:\_websites\DocWebV2\pinPDF.dll           NAME NOT FOUND
         D:\_websites\DocWebV2\pinPDF\pinPDF.dll    FILE NOT FOUND

     I'm not sure why it is not looking in the correct folder if it's under this particular folder only.


  • Modelling Business Logic with NON-Techies

    - by cbmeeks
     The setup: WinForm/ASP.NET MVC projects, learning NHibernate, SQL Server-driven apps. I work with clients that have no idea how to model an application. That's what I'm for. However, we have lots of conflicts with validation, misunderstandings, etc.

     For example, the client will ask for an order entry screen. The screen should require a "product". That's fine and dandy. However, the client didn't know to tell me that the user can't order a product of "Class A" unless it's Tuesday. Or, they need a time entry screen. Two days before it's rolled into production, they casually forgot to mention that certain activities are only valid for certain situations. These situations being a week of coding. Those are, of course, some crude examples (not by much!). But the problem is getting these non-technical clients to lay out their business logic. They somehow didn't realize that the "Class A" problem would come up two weeks later, etc.

     I'm all for agile programming, but is there an easy way to somehow make business logic like this extremely easy to implement and change on almost a daily basis? I am, of course, splitting the project into hopefully intelligent pieces, using NHibernate, etc. But making this business logic so dynamic is really making it hard to project timelines, etc.

     Any suggestions? I know there will never be a perfect client (or a perfect provider), but how do you guys deal with the constant misunderstandings? Thanks.


  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
     In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHibernate's PersistenceSpecification object.

     When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and tear-down fails; in fact, the test successfully bootstraps the in-memory database just as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping runs a second time on a new in-memory database, with a new session and session factory. The error is:

         NHibernate.StaleStateException : Unexpected row count: 0; expected: 1

     The row count is indeed unexpected; the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around, it's that for some reason the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are getting thrown out with each nested container that is lost after every test teardown.

     I've tried every possible variation on disposing and closing every object involved with each test teardown, and it still fails on this lengthy database bootstrap. But non-bootstrap-related database tests appear to work fine. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests.

     Can anyone give me any clue about how I can figure out why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting sessions aren't repeatable? Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable?


  • Bootstrapper (setup.exe) says ".NET 3.5 not found" but launching .msi directly installs application

    - by Marek
     Our installer generates a bootstrapper (setup.exe) and an MSI file - a pretty common scenario. One of the production machines reports a strange problem during install:

     - If the user launches the bootstrapper (setup.exe), it reports that .NET 3.5 is not installed. This happens with an account in the administrators group. No matter if they launch it as administrator or not, same behavior.
     - The application installs fine when application.msi or OurInstallLauncher.exe (see below for explanation) is started directly, no matter whether "run as administrator" is applied.

     We have checked that .NET is installed on the machine (both the 64-bit and 32-bit "versions": under both C:\Windows\Microsoft.NET\Framework64 and C:\Windows\Microsoft.NET\Framework there is a folder named v3.5). This happens on 64-bit Windows 7. I cannot reproduce it on my development 64-bit Windows 7. On Windows XP and Vista, it has worked without any problem for a long time so far.

     Part of our build script that declares the GenerateBootstrapper task (nothing special):

         <ItemGroup>
           <BootstrapperFile Include="Microsoft.Windows.Installer.3.1">
             <ProductName>Microsoft Windows Installer 3.1</ProductName>
           </BootstrapperFile>
           <BootstrapperFile Include="Microsoft.Net.Framework.3.5">
             <ProductName>Microsoft .NET Framework 3.5</ProductName>
           </BootstrapperFile>
         </ItemGroup>

         <GenerateBootstrapper ApplicationFile=".\Files\OurInstallLauncher.exe"
                               ApplicationName="App name"
                               Culture="en"
                               ComponentsLocation="HomeSite"
                               CopyComponents="True"
                               Validate="True"
                               BootstrapperItems="@(BootstrapperFile)"
                               OutputPath="$(OutSubDir)"
                               Path="$(SdkBootstrapperPath)" />

     Note: OurInstallLauncher.exe is a language selector that applies a transform to the MSI based on the user's selection. This is not relevant to the question at all, because the installer never gets as far as launching this exe! It displays that .NET 3.5 is missing right after starting setup.exe. Has anyone seen this behavior before?


  • ASP.NET application using old connection string.

    - by Doug S.
     I am trying to publish a website using ASP.NET MVC 3, EF and Code First with a SQL Server 2008 backend. On my local machine I was using a SQL Express db for development, but now that I am pushing live, I want to use my hosted production database.

     The problem is that when I try to run the application, it is still using my local db connection string. I have completely removed the old connection string from my web.config file and am using the <clear /> tag before creating the new connection string. I have also cleaned the solution and rebuilt, but somehow it is still connecting to the old db. What am I missing?

     This is the new connection string:

         <connectionStrings>
           <clear />
           <add name="CellularAutomataDBContext"
                connectionString="Server=XXX; Database=CellularAutomata; User ID=XXX; Password=XXX; Trusted_Connection=False"
                providerName="System.Data.SqlClient" />
         </connectionStrings>


  • Eclipse Maven web application - can not run on server anymore

    - by wuntee
     I have a Maven Eclipse webapp project that I was able to right-click and 'Run on server', and it would deploy on Tomcat. I recently did a 'Maven - Update project configurations' and I now can NOT deploy and run the project as a webapp. Has anyone seen this before? The only output from Tomcat is as follows - it doesn't even look like it's trying to deploy the application.

         Apr 14, 2010 3:58:54 PM org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: .:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
         Apr 14, 2010 3:58:54 PM org.apache.tomcat.util.digester.SetPropertiesRule begin WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.j2ee.server:taac-web' did not find a matching property.
         Apr 14, 2010 3:58:54 PM org.apache.coyote.http11.Http11Protocol init INFO: Initializing Coyote HTTP/1.1 on http-8080
         Apr 14, 2010 3:58:54 PM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 402 ms
         Apr 14, 2010 3:58:54 PM org.apache.catalina.core.StandardService start INFO: Starting service Catalina
         Apr 14, 2010 3:58:54 PM org.apache.catalina.core.StandardEngine start INFO: Starting Servlet Engine: Apache Tomcat/6.0.24
         Apr 14, 2010 3:58:54 PM org.apache.coyote.http11.Http11Protocol start INFO: Starting Coyote HTTP/1.1 on http-8080
         Apr 14, 2010 3:58:54 PM org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening on /0.0.0.0:8009
         Apr 14, 2010 3:58:54 PM org.apache.jk.server.JkMain start INFO: Jk running ID=0 time=0/14 config=null
         Apr 14, 2010 3:58:54 PM org.apache.catalina.startup.Catalina start INFO: Server startup in 247 ms


  • Avoiding shutdown hook

    - by meryl
     Through the following code I can play and cut an audio file. Is there any other way to avoid using a shutdown hook? The problem is that whenever I push the cut button, the file doesn't get saved until I close the application. Thanks.

         void play_cut() {
             try {
                 // First, we get the format of the input file
                 final AudioFileFormat.Type fileType = AudioSystem.getAudioFileFormat(inputAudio).getType();
                 // Then, we get a clip for playing the audio.
                 c = AudioSystem.getClip();
                 // We get a stream for playing the input file.
                 AudioInputStream ais = AudioSystem.getAudioInputStream(inputAudio);
                 // We use the clip to open (but not start) the input stream
                 c.open(ais);
                 // We get the format of the audio codec (not the file format we got above)
                 final AudioFormat audioFormat = ais.getFormat();
                 // We add a shutdown hook, an anonymous inner class.
                 Runtime.getRuntime().addShutdownHook(new Thread() {
                     public void run() {
                         // We're now in the hook, which means the program is shutting down.
                         // You would need to use better exception handling in a production application.
                         try {
                             // Stop the audio clip.
                             c.stop();
                             // Create a new input stream, with the duration set to the frame count we reached.
                             // Note that we use the previously determined audio format.
                             AudioInputStream startStream = new AudioInputStream(new FileInputStream(inputAudio),
                                     audioFormat, c.getLongFramePosition());
                             // Write it out to the output file, using the same file type.
                             AudioSystem.write(startStream, fileType, outputAudio);
                         } catch (IOException e) {
                             e.printStackTrace();
                         }
                     }
                 });
                 // After setting up the hook, we start the clip.
                 c.start();
             } catch (UnsupportedAudioFileException e) {
                 e.printStackTrace();
             } catch (IOException e) {
                 e.printStackTrace();
             } catch (LineUnavailableException e) {
                 e.printStackTrace();
             }
         } // end play_cut
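     One way to avoid the hook entirely is to do the write at the moment the cut button is pressed rather than at JVM shutdown. A minimal sketch of that idea follows; it assumes the clip, audio format and file type are kept as fields so the cut handler can reach them, and the class, field and file names are illustrative rather than taken from the question:

         import java.io.File;
         import java.io.FileInputStream;
         import java.io.IOException;
         import javax.sound.sampled.AudioFileFormat;
         import javax.sound.sampled.AudioFormat;
         import javax.sound.sampled.AudioInputStream;
         import javax.sound.sampled.AudioSystem;
         import javax.sound.sampled.Clip;

         // Hypothetical player that saves the cut in the button handler instead of a shutdown hook.
         class CuttingPlayer {
             private final File inputAudio = new File("input.wav");
             private final File outputAudio = new File("output.wav");
             private Clip c;
             private AudioFormat audioFormat;
             private AudioFileFormat.Type fileType;

             void play() throws Exception {
                 fileType = AudioSystem.getAudioFileFormat(inputAudio).getType();
                 AudioInputStream ais = AudioSystem.getAudioInputStream(inputAudio);
                 audioFormat = ais.getFormat();
                 c = AudioSystem.getClip();
                 c.open(ais);
                 c.start();
             }

             // Called from the cut button: stop playback, then write the played portion out right away.
             void cut() throws IOException {
                 c.stop();
                 AudioInputStream trimmed = new AudioInputStream(
                         new FileInputStream(inputAudio), audioFormat, c.getLongFramePosition());
                 AudioSystem.write(trimmed, fileType, outputAudio);  // saved immediately
                 trimmed.close();
             }
         }

     Wiring cut() to the button's action listener means the trimmed file is written as soon as the button is pressed, so nothing is deferred to application exit.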


  • SubSonic 3.0 Simple Repository Adding a DateTime Property To An Object

    - by Blounty
     I am trying out SubSonic to see if it is viable to use on production projects. I seem to have stumbled upon an issue with regard to updating the database with default values (String and DateTime) when a new column is created, i.e. when a new property of DateTime or String is added to an object:

         public class Bug {
             public int BugId { get; set; }
             public string Title { get; set; }
             public string Overview { get; set; }
             public DateTime TrackedDate { get; set; }
             public DateTime RemovedDate { get; set; }
         }

     When the code to add that type of object to the database is run,

         var repository = new SimpleRepository(SimpleRepositoryOptions.RunMigrations);
         repository.Add(new Bug() {
             Title = "A Bug",
             Overview = "An Overview",
             TrackedDate = DateTime.Now
         });

     it creates the following SQL:

         UPDATE Bugs SET RemovedDate=''01/01/1900 00:00:00''

     For some reason it is wrapping each end of the string or DateTime in two single quotes, which causes the following error:

         System.Data.SqlClient.SqlException - Incorrect syntax near '01'

     I am connecting to SQL Server 2005. Any help would be appreciated, as apart from this issue I am finding SubSonic to be a great product. I have created a screencast of my error here:


  • Setting up ehcache replication - what multicast settings do I need?

    - by Darren Greaves
     I am trying to set up ehcache replication as documented here: http://ehcache.sourceforge.net/EhcacheUserGuide.html#id.s22.2. This is on a Windows machine but will ultimately run on Solaris in production.

     The instructions say to set up a provider as follows:

         <cacheManagerPeerProviderFactory
             class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
             properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                         multicastGroupPort=4446, timeToLive=32"/>

     And a listener like this:

         <cacheManagerPeerListenerFactory
             class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
             properties="hostName=localhost, port=40001, socketTimeoutMillis=2000"/>

     My questions are:

     - Are the multicast IP address and port arbitrary (I know the address has to live within a specific range, but do they have to be specific numbers)?
     - Do they need to be set up in some way by our system administrator (I am on an office network)?
     - I want to test it locally, so am running two separate Tomcat instances with the above config. What do I need to change in each one? I know both the listeners can't listen on the same port - but what about the provider? Also, are the listener ports arbitrary too?

     I've tried setting it up as above, but in my testing the caches don't appear to be replicated - the value added in one Tomcat's cache is not present in the other cache. Is there anything I can do to debug this situation (other than packet sniffing)? Thanks in advance for any help, been tearing my hair out over this one!
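     Since automatic peer discovery depends on multicast actually reaching both nodes, one low-level sanity check (a sketch, nothing ehcache-specific) is to send and receive a datagram on the same group and port the provider is configured with. Everything here is plain java.net; the class name MulticastCheck is made up for the example:

         import java.net.DatagramPacket;
         import java.net.InetAddress;
         import java.net.MulticastSocket;

         // Usage: java MulticastCheck recv   (on one machine)
         //        java MulticastCheck send   (on the other)
         public class MulticastCheck {
             public static void main(String[] args) throws Exception {
                 InetAddress group = InetAddress.getByName("230.0.0.1");
                 int port = 4446;
                 if (args.length > 0 && args[0].equals("send")) {
                     MulticastSocket socket = new MulticastSocket();
                     socket.setTimeToLive(32);
                     byte[] payload = "hello from a cache peer".getBytes("UTF-8");
                     socket.send(new DatagramPacket(payload, payload.length, group, port));
                     socket.close();
                 } else {
                     MulticastSocket socket = new MulticastSocket(port);
                     socket.joinGroup(group);                 // listen on the multicast group
                     byte[] buffer = new byte[1024];
                     DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                     socket.receive(packet);                  // blocks until a datagram arrives
                     System.out.println("received: " + new String(packet.getData(), 0, packet.getLength(), "UTF-8"));
                     socket.leaveGroup(group);
                     socket.close();
                 }
             }
         }

     Run the receiver on one box and the sender on the other; if nothing ever arrives, the office network or a local firewall is probably dropping multicast, and ehcache's automatic peer discovery will not work either.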


  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
     In our web applications, we separate our Data Access Layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings.

     To solve this, on some of our recent projects we introduced a third project just for settings. We put the settings in a system of .settings files... With a simple wrapper, the ability to have different settings for various environments (Dev, QA, Staging, Production, etc.) was easy to achieve. The only problem there is that the settings project (including the .Settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio.

     So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Some solutions, such as storing settings in a fixed directory on the server in, say, our own XML format, occurred to us. But again, I would rather avoid having to re-create encryption for sensitive values and so on. And I would rather keep the solution self-contained if possible.

     EDIT: The original question did not contain the really penetrating reason that we can't (I think) use web.config... That puts a few (very good) answers out of context, my bad.


  • Google App Engine - Caching generated HTML

    - by Alexander
     I have written a Google App Engine application that programmatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be inefficient when the code goes into production. So, I am trying to figure out the best way to cache the generated pages.

     The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. Then, if the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated and served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it (therefore avoiding all the CPU wastage of generating the HTML). I am not only looking to minimize load times, but to minimize CPU usage.

     However, one issue that I am having is that I can't figure out how to programmatically check when the version of code uploaded to App Engine was updated. I am open to any suggestions on this approach, or other approaches for caching generated HTML. Note that while memcache could help in this situation, I believe that it is not the final solution since I really only need to re-generate HTML when the code is updated (as opposed to every time the memcache expires).

     Kind regards, and thank you in advance for any suggestions you may be able to offer. -Alex


  • Syncing data between devel/live databases in Django

    - by T. Stone
     With Django's new multi-db functionality in the development version, I've been trying to create a management command that lets me synchronize the data from the live site down to a developer machine for extended testing. (Having actual data, particularly user-entered data, allows me to test a broader range of inputs.)

     Right now I've got a "mostly" working command. It can sync "simple" model data, but the problem I'm having is that it ignores ManyToMany fields, which I don't see any reason for it to do. Anyone have any ideas of either how to fix that or a better way to handle this? Should I be exporting that first query to a fixture first and then re-importing it?

         from django.core.management.base import LabelCommand
         from django.db.utils import IntegrityError
         from django.db import models
         from django.conf import settings

         LIVE_DATABASE_KEY = 'live'

         class Command(LabelCommand):
             help = ("Synchronizes the data between the local machine and the live server")
             args = "APP_NAME"
             label = 'application name'
             requires_model_validation = False
             can_import_settings = True

             def handle_label(self, label, **options):
                 # Make sure we're running the command on a developer machine and that we've got the right settings
                 db_settings = getattr(settings, 'DATABASES', {})
                 if not LIVE_DATABASE_KEY in db_settings:
                     print 'Could not find "%s" in database settings.' % LIVE_DATABASE_KEY
                     return
                 if db_settings.get('default') == db_settings.get(LIVE_DATABASE_KEY):
                     print 'Data cannot synchronize with self. This command must be run on a non-production server.'
                     return

                 # Fetch all models for the given app
                 try:
                     app = models.get_app(label)
                     app_models = models.get_models(app)
                 except:
                     print "The app '%s' could not be found or models could not be loaded for it." % label

                 for model in app_models:
                     print 'Syncing %s.%s ...' % (model._meta.app_label, model._meta.object_name)

                     # Query each model from the live site
                     qs = model.objects.all().using(LIVE_DATABASE_KEY)

                     # ...and save it to the local database
                     for record in qs:
                         try:
                             record.save(using='default')
                         except IntegrityError:
                             # Skip as the record probably already exists
                             pass


  • ColdFusion blank page in IE7 on refresh?

    - by richardtallent
     I'm new to ColdFusion and have a very basic problem that's really slowing me down. I'm making edits in a text editor and refreshing the page in web browsers for testing. Standard web dev stuff - no browser-sniffing, redirection, or other weirdness, and no proxies involved.

     When I refresh the page in Chrome or Firefox, everything works fine, but when I refresh in IE7, I get a blank page. View Source shows me:

         <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
         <HTML><HEAD>
         <META http-equiv=Content-Type content="text/html; charset=utf-8"></HEAD>
         <BODY></BODY></HTML>

     That's it. While I am rendering to the transitional DTD, the real head contains a title, etc. My development server is CF 9, production is 8. This problem has been happening in both. It seems to only be happening on pages that are the result of a POST action. I've never experienced this in ASP.NET (my usual development environment) using the same browsers.


  • Ruby on Rails delayed_job failing with rvm

    - by mottalrd
     I have delayed_job installed, and I start the daemon that runs the jobs with this Ruby script:

         require 'rubygems'
         require 'daemon_spawn'
         $: << '.'

         RAILS_ROOT = File.expand_path(File.join(File.dirname(__FILE__), '..'))

         class DelayedJobWorker < DaemonSpawn::Base
           def start(args)
             ENV['RAILS_ENV'] ||= args.first || 'development'
             Dir.chdir RAILS_ROOT
             require File.join('config', 'environment')
             Delayed::Worker.new.start
           end

           def stop
             system("kill `cat #{RAILS_ROOT}/tmp/pids/delayed_job.pid`")
           end
         end

         DelayedJobWorker.spawn!(:log_file => File.join(RAILS_ROOT, "log", "delayed_job.log"),
                                 :pid_file => File.join(RAILS_ROOT, 'tmp', 'pids', 'delayed_job.pid'),
                                 :sync_log => true,
                                 :working_dir => RAILS_ROOT)

     If I run the command with rvmsudo it works perfectly. If I simply use the ruby command without rvm, it fails with the output below, and I have no idea why this happens. Could you give me some clue? Thank you.

         user@mysystem:~/redeal.it/application$ ruby script/delayed_job start production
         /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:16:in `kill': Operation not permitted (Errno::EPERM)
             from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:16:in `alive?'
             from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:125:in `alive?'
             from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:176:in `block in start'
             from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:176:in `select'
             from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:176:in `start'
             from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:165:in `spawn!'
             from script/delayed_job:37:in `<main>'


  • Problem with Zend Project

    - by Fincha
     Hello, I wrote a script and it works perfectly on my local server. I have uploaded it to my server and now I am getting this problem:

         Parse error: syntax error, unexpected '{' in /homepages/46/d319011794/htdocs/suche/public/index.php on line 18

     And here is my index.php:

         <?php
         error_reporting(E_ALL || E_STRICT);

         define('APPLICATION_PATH', realpath(dirname(__FILE__)) . '/../application');

         set_include_path(
             APPLICATION_PATH . '/../library'
             . PATH_SEPARATOR . '../application/models'
             . PATH_SEPARATOR . get_include_path()
         );

         require_once 'Zend/Loader.php';
         Zend_Loader::registerAutoload();

         new App_Connect();

         try {  // Line 18
             require '../application/bootstrap.php';
         } catch(Exception $exception) {
             echo "<html><body>Fehler beim bootstraping";
             if(defined('APPLICATION_ENVIROMENT') && APPLICATION_EVIROMENT != 'production') {
                 echo "<br><br>" . $exception->getMessage() . "<br>"
                     . "<div align='left'>Stack Trace: "
                     . "<pre> " . $exception->getTraceAsString() . "</pre></div>";
             }
             echo "</body></html>";
             exit(1);
         }

         Zend_Controller_Front::getInstance()->dispatch();

     This is a Zend project... so maybe someone knows what to do...


  • Looking for a good dev environment for OSGi bundles

    - by Riduidel
     Hi, I'm currently investigating dev environments for OSGi bundles. My goal is to find a way to develop, test and debug the bundles I'll be coding with ease. Besides, I have some "cultural" requirements:

     - I want to be able to use Java continuous integration servers (typically, Hudson).
     - As a consequence of that first requirement, I want to have a repeatable, one-click build process. My typical tool for that is Maven.
     - And finally, being a long-term Eclipse user, and having m2eclipse at hand to merge my Eclipse env with my Maven one, I obviously want to be able to test and debug with that IDE.

     So far, here are the options I know I can use (and have already tested):

     - maven-bundle-plugin and maven-ipojo-plugin, which both offer clean packaging facilities.
     - I have tested Maven Pax (and Eclipse Pax) and am not really satisfied with either: Maven Pax generates a very heavy project, where adding dependencies is very error-prone (the maven pax:import-bundle command line, with all its arguments, is a hell per se).
     - I have taken a look at Karaf, which seems to have some nice direct Maven provisioning, but I don't know how to integrate it with my Eclipse, besides using the traditional JPDA bridge. However, it seems to be more production-oriented than dev-oriented, and as such may require heavy configuration to fit my need (although reading its user manual doesn't reveal that).

     Have you got any ideas? Some Maven/Eclipse plugins?
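     For the CI side, one approach worth sketching is to launch an embedded OSGi framework from a plain Java/JUnit test via the standard org.osgi.framework.launch API, so the bundles built by Maven can be installed and exercised inside the same one-click build. This is only a sketch: it assumes an R4.2+ framework implementation (Felix or Equinox) is on the test classpath so that ServiceLoader can find a FrameworkFactory, and the bundle path is made up:

         import java.util.HashMap;
         import java.util.Map;
         import java.util.ServiceLoader;
         import org.osgi.framework.Bundle;
         import org.osgi.framework.BundleContext;
         import org.osgi.framework.Constants;
         import org.osgi.framework.launch.Framework;
         import org.osgi.framework.launch.FrameworkFactory;

         public class EmbeddedOsgiSmokeTest {
             public static void main(String[] args) throws Exception {
                 // Pick up whichever framework implementation is on the classpath.
                 FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class).iterator().next();

                 Map<String, String> config = new HashMap<String, String>();
                 config.put(Constants.FRAMEWORK_STORAGE_CLEAN, "onFirstInit");  // fresh framework cache per run

                 Framework framework = factory.newFramework(config);
                 framework.start();
                 BundleContext context = framework.getBundleContext();

                 // Install and start the bundle produced by the Maven build (path is illustrative).
                 Bundle bundle = context.installBundle("file:target/my-bundle-1.0.0.jar");
                 bundle.start();
                 System.out.println("bundle state: " + bundle.getState());

                 framework.stop();
                 framework.waitForStop(10000);
             }
         }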


  • Testing a patch to the Rails mysql adapter

    - by Sleepycat
     I wrote a little monkeypatch to the Rails MySQLAdapter and want to package it up to use it in my other projects. I am trying to write some tests for it, but I am still new to testing and I am not sure how to test this. Can someone help get me started? Here is the code I want to test:

         unless RAILS_ENV == 'production'
           module ActiveRecord
             module ConnectionAdapters
               class MysqlAdapter < AbstractAdapter

                 def select_with_explain(sql, name = nil)
                   explanation = execute_with_disable_logging('EXPLAIN ' + sql)
                   e = explanation.all_hashes.first
                   exp = e.collect{|k,v| " | #{k}: #{v} "}.join
                   log(exp, 'Explain')
                   select_without_explain(sql, name)
                 end

                 def execute_with_disable_logging(sql, name = nil) #:nodoc:
                   # Run a query without logging
                   @connection.query(sql)
                 rescue ActiveRecord::StatementInvalid => exception
                   if exception.message.split(":").first =~ /Packets out of order/
                     raise ActiveRecord::StatementInvalid, "'Packets out of order' error was received from the database. Please update your mysql bindings (gem install mysql) and read http://dev.mysql.com/doc/mysql/en/password-hashing.html for more information. If you're on Windows, use the Instant Rails installer to get the updated mysql bindings."
                   else
                     raise
                   end
                 end

                 alias_method_chain :select, :explain
               end
             end
           end
         end

     Thanks.


  • searching for a programming platform with hot code swap

    - by Andreas
     I'm currently brainstorming over the idea of how to upgrade a program while it is running. (Not while debugging - a "production" system.) One thing that is required for it is to actually submit the changed source code or compiled byte code into the running process. Pseudo-code:

         var method = typeof(MyClass).GetMethod("Method1");
         var content = // get it from a database (bytecode or source code)
                       // SELECT content FROM methods WHERE id=? AND version=?
         method.SetContent(content);

     At first, I want the system to work without the complexity of object orientation. That leads to the following requirements:

     - change the source code or byte code of a function
     - drop functions
     - add new functions
     - change the signature of a function

     With .NET (and others) I could inject a class via an IoC and could thus change the source code. But the loading would be cumbersome, because everything has to be in an Assembly or created via Emit. Maybe with Java this would be easier? The whole ClassLoader is replaceable, I think.

     With JavaScript I could achieve many of the goals. Simply eval a new function (MyMethod_V25) and assign it to MyClass.prototype.MyMethod. I think one can also drop functions somehow with "del".

     Which general-purpose platform can handle such things?
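     On the Java side, the "replaceable ClassLoader" idea mentioned above can be sketched with nothing but the JDK: load the implementation class through a fresh, throw-away URLClassLoader on every call, so newly compiled bytecode dropped into a directory takes effect without restarting the process. The directory, class and method names here are made up for the example:

         import java.io.File;
         import java.lang.reflect.Method;
         import java.net.URL;
         import java.net.URLClassLoader;

         public class HotSwapDemo {
             static Object runLatest(String classesDir, String className, String methodName) throws Exception {
                 URL[] urls = { new File(classesDir).toURI().toURL() };
                 // A fresh, short-lived loader per call: discarding it lets the old class version go away.
                 try (URLClassLoader loader = new URLClassLoader(urls, HotSwapDemo.class.getClassLoader())) {
                     Class<?> cls = loader.loadClass(className);
                     Object instance = cls.getDeclaredConstructor().newInstance();
                     Method method = cls.getMethod(methodName);
                     return method.invoke(instance);
                 }
             }

             public static void main(String[] args) throws Exception {
                 // Recompile com.example.Worker into /tmp/plugins between calls and the next
                 // invocation picks up the new method body.
                 System.out.println(runLatest("/tmp/plugins", "com.example.Worker", "method1"));
             }
         }

     This swaps whole classes rather than individual method bodies, and it only works for classes that are not also on the application classpath (otherwise the parent loader wins); changing a function's signature then simply means the next loadClass sees the new one.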


  • How can I display an ASP.NET MVC html part from one application in another

    - by Frank Sessions
     We have several ASP.NET MVC apps in the following setup:

     - SecurityApp (root application - handles forms auth for SSO and has a profile edit page)
     - Application1 (virtual directory)
     - Application2 (virtual directory)
     - Application3 (virtual directory)

     so that domain.com points to SecurityApp and domain.com/Application1 etc. point to their associated virtual directories. All of our Single Sign On (SSO) is working properly using forms authentication. Based on the user's permissions when logging in, a menu that lists their available applications and a logout link is generated and saved in the cache - this menu displays fine whenever the user is in the SecurityApp (editing their profile), but we cannot figure out how to get the applications in the virtual directories to display the same application menu.

     We have tried:

     1) Using JSONP to do a request that will return the HTML for the menu. The ajax call returns the HTML; however, because User.IsAuthenticated is false, the menu comes back empty.

     2) Creating a user control and including it along with the dlls for the SecurityApp project. This works; however, we don't want to have to include all the dlls for the SecurityApp project in every application that we create (along with all the app settings in the web.config).

     We would like this to be as simple as possible to implement, so that anyone creating a new app can add the menu to their application in as few steps as possible... Any ideas?

     To clarify - we are using ASP.NET MVC 1.0, since these apps are in production and we do not have the okay to go to ASP.NET MVC 2.0 (unfortunately).


  • Strange File Upload issue with asp.net site on a web farm

    - by Coov
     I have a basic ASP.NET file upload page. When I test file uploads from my local machine, it works fine. When I test file uploads from our dev machine, it works fine. When I deploy the site to our production web farm, it behaves strangely.

     If I access the site from off the network, I can upload file after file without issue. If I access the site from within our network, I can upload the first file just fine, but any subsequent files result in a "bad sequence of commands" error. I'm not sure if this is a web farm issue, a network issue, or something else. It feels like a connection is not being disposed of properly, but it doesn't make sense why everything works fine remotely.

     Markup:

         <asp:FileUpload ID="FileUpload1" runat="server" Width="350px" />
         <asp:Button ID="btnSubmit" runat="server" Text="Upload" onclick="btnSubmit_Click" />

     Code:

         if (FileUpload1.HasFile)
         {
             FtpWebRequest ftpRequest;
             FtpWebResponse ftpResponse;

             ftpRequest = (FtpWebRequest)FtpWebRequest.Create(new Uri("ftp://ftp.myftpsite.com/" + FileUpload1.FileName));
             ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
             ftpRequest.Proxy = null;
             ftpRequest.UseBinary = true;
             ftpRequest.Credentials = new NetworkCredential("username", "password");
             ftpRequest.KeepAlive = false;

             byte[] fileContents = new byte[FileUpload1.PostedFile.ContentLength];

             using (Stream fr = FileUpload1.PostedFile.InputStream)
             {
                 fr.Read(fileContents, 0, FileUpload1.PostedFile.ContentLength);
             }

             using (Stream writer = ftpRequest.GetRequestStream())
             {
                 writer.Write(fileContents, 0, fileContents.Length);
             }

             ftpResponse = (FtpWebResponse)ftpRequest.GetResponse();
             Response.Write(ftpResponse.StatusDescription);
         }


  • Child sProc cannot reference a Local temp table created in parent sProc

    - by John Galt
     On our production SQL 2000 instance, we have a database with hundreds of stored procedures, many of which use a technique of creating a #TEMP table "early" on in the code; various inner stored procedures then get EXECUTEd by this parent sProc. In SQL 2000, the inner or "child" sProcs have no problem INSERTing into #TEMP or SELECTing data from #TEMP. In short, I assume they can all refer to this #TEMP because they use the same connection.

     In testing with SQL 2008, I find two manifestations of different behavior. First, at design time, the new IntelliSense feature in Management Studio complains, when editing the child sProc, that #TEMP is an "invalid object name". But worse is that at execution time, the invoked parent sProc fails inside the nested child sProc.

     Someone suggested that the solution is to change to ##TEMP, which is apparently a global temporary table that can be referenced from different connections. That seems too drastic a proposal, both in the amount of work to chase down all the problem spots and in the possible/probable nasty effects when these sProcs are invoked from web applications (i.e. multiuser issues).

     Is this indeed a change in behavior in SQL 2005 or SQL 2008 regarding #TEMP (local temp tables)? We skipped 2005, but I'd like to learn more precisely why this is occurring before I go off and try to hack out the needed fixes. Thanks.


  • Sinatra application running on Dreamhost suddenly not working

    - by jbrennan
     My Sinatra application was running fine on Dreamhost until a few days ago (I'm not sure precisely when it went bad). Now when I visit my app I get this error:

         can't activate rack (~> 1.1, runtime) for ["sinatra-1.1.2"], already activated rack-1.2.1 for []

     I have no idea how to fix this. I've tried updating all my gems, then touching the app/tmp/restart.txt file, but still no fix. I hadn't touched any files of my app, nor my Dreamhost account. It just busted on its own (my guess is Dreamhost changed something on their server which caused the bust).

     When I originally deployed my app, I had to go through some hoops to get it working, and I seem to think I was using gems in a custom location, but I can't remember exactly where or how. I don't know my way around Rack/Passenger very well.

     Here's my config.ru (mostly grafted from around the web; I don't fully understand it):

         ENV['RACK_ENV'] = 'development' if ENV['RACK_ENV'].empty?

         #### Make sure my own gem path is included first
         ENV['GEM_HOME'] = "#{ENV['HOME']}/.gems"
         ENV['GEM_PATH'] = "#{ENV['HOME']}/.gems:"
         require 'rubygems'
         Gem.clear_paths ## NB! key part

         require 'sinatra'

         set :env, :production
         disable :run

         require 'MY_APP_NAME.rb'

         run Sinatra::Application

