Search Results

Search found 11195 results on 448 pages for 'disconnected environment'.

Page 9/448 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • How to maintain base files for development environment central while allowing people to change their

    - by Ittai
    What I'd like to do is keep files in a central location so that when I add people to my development team they can see the base version of these files, while each team member still has the ability to work with their own local version. I know I could just put the files in source control (we use TortoiseSVN) and have the team change their local copies, but I'd rather not: the exclamation mark signalling that a file has been changed and needs to be committed, quite frankly, irritates me greatly. Two examples of what I mean: We use quite a few build.xml files which refer to a single properties file containing many definitions. Some of them can differ between team members (mainly temporary working directories), and I'd like a new team member to be able to get the properties file with the base config and change it if they wish. Similarly, keep the Eclipse settings files in SVN so that when a new team member joins they can just retrieve the files from the server and have a base system running, then change some of those settings if they wish. Thanks, Ittai
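
    One common way to get the workflow described above, sketched here rather than prescribed: keep only *.template copies of the shared files under version control, mark the real file names with svn:ignore, and have each developer create their local copies from the templates on first checkout. A minimal Python bootstrap for that step (the .template naming is just an assumption for the example, not something from the question):

        import shutil
        from pathlib import Path

        # Copy every versioned "<name>.template" file to a local "<name>" the
        # first time a developer checks the project out.  Because the local
        # copies are svn:ignore'd, editing them never shows up as a pending
        # commit in TortoiseSVN.
        def bootstrap_local_configs(project_root="."):
            for template in Path(project_root).rglob("*.template"):
                local = template.with_suffix("")      # strip the ".template" suffix
                if not local.exists():                # never clobber local edits
                    shutil.copyfile(template, local)
                    print(f"created {local} from {template.name}")

        if __name__ == "__main__":
            bootstrap_local_configs()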

    Read the article

  • Unable to resolve user environment variable correctly

    - by Junaid
    I am trying to resolve %USERPROFILE% using WScript.Shell. When I create a .vbs file and run it directly from Windows, I get the correct path for the logged-in user, C:\Documents and Settings\Administrator, but it resolves to C:\Documents and Settings\Default User instead of the logged-in user when I use it inside my classic ASP web app running on the local machine under IIS. The code I used is: var oShell = new ActiveXObject("Wscript.Shell"); var userPath = oShell.ExpandEnvironmentStrings("%USERPROFILE%"); Is there a permission or setting I need to check to get the correct value of USERPROFILE when retrieving it from the web app? P.S.: I am using JavaScript to code.
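
    The expansion simply follows whatever account owns the process, which is why the same call returns the Default User profile under IIS. A quick way to observe the effect from any process, sketched in Python rather than the JScript above:

        import getpass
        import os

        # Print the account this process runs as and the profile %USERPROFILE%
        # expands to for it (Windows-style %VAR% expansion).  Run interactively
        # this shows the logged-in user; run under a service identity, as IIS
        # does for the ASP app above, it shows that identity's profile instead.
        print("running as :", getpass.getuser())
        print("USERPROFILE:", os.path.expandvars("%USERPROFILE%"))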

    Read the article

  • App.config settings, environment variable as partial path

    - by Jean-Bernard Pellerin
    I'm new to tinkering with app.config and XML, and am currently doing some refactoring in code I didn't write. Currently we have a snippet which looks like this: <setting name="FirstSetting" serializeAs="String"> <value>Data Source=C:\Documents and Settings\All Users\ApplicationData\Company ...;Persist Security Info=False</value> What I'd like to do instead is have it point to something like ${PROGRAMDATA}\Company\... How can I achieve this, keeping in mind that PROGRAMDATA will not always point to C:\ProgramData?
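
    One general approach, regardless of language: store the placeholder in the setting and expand it against the environment when the value is read (in .NET the analogous call is Environment.ExpandEnvironmentVariables). A small Python sketch of the idea, using Windows-style %VAR% syntax and a made-up file name:

        import os

        # A connection-string setting that stores an environment-variable
        # placeholder instead of a hard-coded C:\ path.
        raw_setting = r"Data Source=%PROGRAMDATA%\Company\data.db;Persist Security Info=False"

        # Expand %PROGRAMDATA% at load time, so the same config works no matter
        # where ProgramData actually lives on a given machine.
        connection_string = os.path.expandvars(raw_setting)
        print(connection_string)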

    Read the article

  • BizTalk 2009 - Architecture Decisions

    - by StuartBrierley
    In the first step towards implementing a BizTalk 2009 environment, from development through to live, I put forward a proposal that detailed the options available, along with the costs and benefits associated with each, to allow an informed discussion to take place with the business drivers and budget holders of the project. This ultimately led to a decision to implement an initial BizTalk Server 2009 environment using the Standard Edition of the product. It is my hope that in the long term, as projects require and allow, we will implement my ideal recommendation of a multi-server, enterprise-level environment, but given the differences in cost and the likely initial workload for the environment this was not something I could fully recommend at this time. However, it must be noted that this decision was made in full awareness of the limits of the Standard Edition, and the business drivers of this project were made fully aware of the risks of running without the failover capabilities of the Enterprise Edition.

    When considering the creation of this new BizTalk Server 2009 environment, I also recommended the creation of the following pre-production environments:

    • Development. Usage: development of solutions; unit testing against technical specifications; initial load testing; testing of deployment packages. Software/hardware: Visual Studio; BizTalk; SQL; client PCs/laptops; a server environment similar to the live implementation.
    • Test. Usage: testing of solutions against business and technical requirements. Software/hardware: BizTalk; SQL; a server environment similar to the live implementation.
    • Pseudo-Live. Usage: an as-live environment to allow testing against the live implementation; also acts as backup hardware in case of failure of the live environment. Software/hardware: BizTalk; SQL; a server environment identical to the live implementation.

    The creation of these differing environments allows for the separation of the various stages of the development cycle. The development environment is for use when actively developing a solution; it is a potentially volatile environment whose state at any given time cannot be guaranteed. It allows developers to carry out initial tests in an environment that is similar to the live environment, and it also provides an area for testing deployment packages prior to any release to the test environment.

    The test environment is intended to be a semi-volatile environment that is similar to the live environment. It will change periodically through the development of a solution (or solutions) but should otherwise be stable. It allows for continued testing of a solution against requirements without the worry that the environment is being actively changed by ongoing development. This separation of development and test is crucial in ensuring the quality and control of the tested solution.

    The pseudo-live environment should be considered an almost static environment. It should mimic the live environment and can act as backup hardware in the case of live failure. It provides an area for "as live" testing, where the performance and behaviour of the live solutions can be replicated. There should be relatively few changes to this environment, with software releases limited to release-candidate-level releases prior to going live.

    Whereas the pseudo-live environment should always mimic the live environment, to save on costs the development and test servers could be implemented on lower-specification hardware. Consideration can also be given to the use of a virtual server environment to further reduce hardware costs in development and test; indeed, this virtual approach can also be extended to pseudo-live and live, assuming the underlying technology is in place. Although there is no requirement for the development and test server environments to be identical to live, the overriding architecture implemented should be the same as in live, and an understanding must be gained of the performance differences to be expected across the different environments.

    Read the article

  • How can I pass environment variables to a WSGI script, using uWSGI?

    - by orokusaki
    I've added the following line to /etc/environment: FOO_DEPLOYMENT_ENV="vbox" Upon logging in via SSH, I can echo $FOO_DEPLOYMENT_ENV and, of course, see vbox printed to the shell. If I open a Python shell and run os.getenv('FOO_DEPLOYMENT_ENV'), it returns 'vbox', but when the same code runs in my Python application under uWSGI (as the www-data user), it does not see the environment variable. Clearly this isn't a problem with uWSGI, but rather with my understanding of environment variables, how they're properly set, and the contexts in which they can be retrieved. What am I doing or understanding incorrectly?
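
    A likely explanation: /etc/environment is read by PAM when a login session is created, so a process that never goes through a login (uWSGI started by its init script, for instance) will not inherit it; the variable has to be handed to the service explicitly, for example with uWSGI's env = FOO_DEPLOYMENT_ENV=vbox ini option or --env flag. On the application side it helps to fail loudly when the variable is missing, as in this small sketch:

        import os

        # Read the deployment flag and fail loudly if it is absent, instead of
        # silently behaving like a different environment.
        def deployment_env():
            value = os.getenv("FOO_DEPLOYMENT_ENV")
            if value is None:
                raise RuntimeError(
                    "FOO_DEPLOYMENT_ENV is not set for this process; "
                    "pass it to uWSGI explicitly (env = ... in the ini file)"
                )
            return value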

    Read the article

  • Ubuntu wired network disconnected

    - by Deep
    I am not able to establish a wired network connection between two computers on which I just installed Ubuntu 10.04. I am new to this environment. Unlike in the Windows environment, where it happens by just connecting them with a cable, Ubuntu keeps flashing a notification saying "Wired network disconnected". Am I missing a driver or something? I am able to connect to the wi-fi router without issue. The wired connection is just not working.

    Read the article

  • Should Development / Testing / QA / Staging environments be similar?

    - by Walter White
    Hi all, after much time and effort, we're finally using Maven to manage our application lifecycle for development. Unfortunately we still use Ant to build an EAR before deploying to Test / QA / Staging. While we made that leap forward, developers are still free to do as they please when testing their code. One issue we have is that half our team tests on Tomcat and the other half on Jetty. I slightly prefer Jetty over Tomcat, but regardless, we use WAS (WebSphere Application Server) for all the other environments. My question is: should we develop on the same application server we're deploying to? We've had numerous bugs come up from these differences in environments; Tomcat, Jetty, and WAS are different under the hood. My opinion is that we should all develop on what we're deploying to production with, so we don't run into "well, it worked fine on my machine". While I prefer Jetty, I'd just as soon we all work in the same environment, even if that means deploying to WAS, which is slow and cumbersome. What are your team dynamics like? Our lead developers stepped down from the team and development has been a free-for-all since then. Walter

    Read the article

  • localhost + staging + production environments?

    - by Kentor
    Hello, I have a website, say www.livesite.com, which is currently running. I have been developing a new version of the website on my local machine with http://localhost and then committing my changes with SVN to www.testsite.com, where I test the site on the livesite.com server but under another domain (it's the same environment as the live site, just a different domain). Now I am ready to release the new version to livesite.com. Doing it the first time is easy: I could just copy everything from testsite.com to livesite.com (not sure that's the best way to do it). I want to keep testsite.com as a testing site where I push updates, test them, and once satisfied move them to livesite.com, but I am not sure how to do that after the new site is launched. I don't think copying the whole directory is the right way of doing it, and it would break the operations of current users on livesite.com. I also want to keep my SVN history on testsite.com. What is the correct way of doing this with SVN? Thank you so much!
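
    One common pattern, sketched below purely as an illustration (the repository URL and paths are placeholders, not taken from the question): export the tested revision into a fresh timestamped directory on the live server and switch a symlink that the web server's document root points at, so the changeover is close to atomic and running requests keep a consistent tree.

        import os
        import subprocess
        import time

        REPO = "https://svn.example.com/repo/trunk"   # placeholder repository URL
        RELEASES = "/var/www/livesite/releases"       # placeholder paths
        CURRENT = "/var/www/livesite/current"         # the web server docroot points here

        def deploy(revision):
            os.makedirs(RELEASES, exist_ok=True)
            target = os.path.join(RELEASES, time.strftime("%Y%m%d%H%M%S"))
            # Export (not checkout) so no .svn metadata lands on the live box.
            subprocess.check_call(["svn", "export", "-r", str(revision), REPO, target])
            tmp_link = CURRENT + ".tmp"
            os.symlink(target, tmp_link)
            os.replace(tmp_link, CURRENT)             # near-atomic symlink switch

        deploy(revision=1234)                         # e.g. the revision already tested on testsite.com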

    Read the article

  • How to load the environment variables at boot time before X11 on Ubuntu Precise?

    - by Fnux
    Using Ubuntu Precise 64-bit, I'm facing a problem that I'm unable to solve and that I'll try to describe below. I'm using a console-mode program (let's say abc) that uses Go, NodeJS, Java and Scala. In order for abc to work with these languages, I have to declare the following:

    a) within /etc/environment:

        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        CLASSPATH=$CLASSPATH:/usr/share/java/scala-library.jar

    b) within /etc/login.defs:

        ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin
        ENV_PATH   PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin

    c) within /etc/sudoers:

        # env_reset
        Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"

    Then, when I start abc from a terminal, all is fine and I can use any of the four languages described above. However, if I put a script within /etc/init.d that starts abc during the boot process (i.e. before the GUI starts), using Java from abc is still fine, but using Go, NodeJS or Scala no longer works. I guess that during the boot process the script within /etc/init.d that starts abc is executed before the environment variables set within /etc/sudoers, /etc/environment and /etc/login.defs are loaded. So my question is: how do I force the environment variables to be loaded before my script starting abc is launched? Any help and advice on this topic would be truly appreciated. TIA. Cheers.

    Thanks again to Mark and Danila. Below is the current "abc" script file that I put within /etc/init.d:

        #! /bin/sh
        ### EDIT: ADD THIS VARS DEFINITIONS:
        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        CLASSPATH=$CLASSPATH:/usr/share/java/scala-library.jar
        "ENV_SUPATH PATH"="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        "ENV_PATH PATH"="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        "Defaults secure_path"="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin"
        ##### EXPORT this VARS so they are accessible to children:
        export "PATH" "CLASSPATH" "ENV_SUPATH PATH" "ENV_PATH PATH" "Defaults secure_path"
        ### BEGIN INIT INFO
        # Provides:          abc
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: abc initscript
        # Description:       This initscript starts and stops abc
        ### END INIT INFO
        # Author: Fnux, fnux.fl at gmail dot com
        # Version: 1.2
        # Note: (edit ABC_PATH if abc isn't installed in /opt/abc)
        NAME=abc
        ABC_PATH=/opt/abc
        START="-d"
        STOP="-k"
        VERSION="-v"
        SCRIPTNAME=/etc/init.d/$NAME
        STARTMESG="\nStarting abc in daemon mode."
        UPMESG="\n$NAME is running."
        DOWNMESG="\n$NAME is not running."
        STATUS=`pidof $NAME`
        # Exit if abc is not installed
        [ -x "$ABC_PATH/$NAME" ] || exit 0
        case "$1" in
          start)
            echo $STARTMESG
            cd $ABC_PATH
            ./$NAME $START
            ;;
          stop)
            cd $ABC_PATH
            ./$NAME $STOP
            ;;
          status)
            if [ "$STATUS" > 0 ] ; then
              echo $UPMESG
            else
              echo $DOWNMESG
            fi
            ;;
          restart)
            cd $ABC_PATH
            ./$NAME $STOP
            echo $STARTMESG
            ./$NAME $START
            ;;
          version)
            cd $ABC_PATH
            ./$NAME $VERSION
            ;;
          *)
            echo "Usage: $SCRIPTNAME {start|status|restart|stop|version}" >&2
            exit 3
            ;;
        esac
        :

    So, where and how should I write the needed environment variables, given that: a) Go needs the PATH entries listed above (the PATH line in /etc/environment, the ENV_SUPATH and ENV_PATH lines in /etc/login.defs, and the Defaults secure_path line in /etc/sudoers, all ending in /usr/local/go/bin); and b) Scala needs the CLASSPATH entry (CLASSPATH=$CLASSPATH:/usr/share/java/scala-library.jar)? TIA for an explanation of how to do so. Cheers.

    Read the article

  • Different PATH environment variable for 32bit and 64bit Windows - is it possible?

    - by Piotr Dobrogost
    Is it possible to have the whole PATH environment variable, or part of it, be specific to the image type (32-bit/64-bit) of the running process? When I run an app from within 64-bit cmd.exe I would like it to pick up the 64-bit version of the OpenSSL library, whereas when I run an app from within 32-bit cmd.exe I would like it to pick up the 32-bit version of the OpenSSL library. FOLLOW-UP: where.exe does not find OpenSSL libs when %ProgramFiles% variable is used in the PATH environment variable
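
    Windows already does something like this for a few variables: on 64-bit Windows a 32-bit process sees %ProgramFiles% as C:\Program Files (x86) while a 64-bit process sees C:\Program Files, with ProgramW6432 and ProgramFiles(x86) available to reach the other view. A small Python probe that shows the difference when run under 32-bit and 64-bit interpreters:

        import os
        import struct

        # 32-bit and 64-bit processes see different values for the same variable
        # names, which is what the per-bitness PATH trick above relies on.
        print("pointer size :", struct.calcsize("P") * 8, "bit process")
        for name in ("ProgramFiles", "ProgramFiles(x86)", "ProgramW6432"):
            print(f"{name:18} = {os.environ.get(name)}")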

    Read the article

  • Good, simple reasons for having multiple environments

    - by smp7d
    Throughout my career I have worked at companies that had a collection of different environments for different purposes. We always had, more or less, a desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using. When I started at my current company I found that 90% of the apps were either developed in a desktop environment against production data sources or developed directly on the production server, depending on the platform. I wasn't fazed, because I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process. We slowly started to turn the philosophy around, and pretty soon most of the apps could be run in either a desktop, test or production environment. Not too long after that, staging came around as well. Now most of our developers see the benefit of this methodology and defend it vigorously. However, we have a number of legacy apps that never got migrated, and a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized despite the considerable planning we put into it. Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup, and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function in what I consider a "normal" way only if you have the right people on the project and the time to set up the proper environments. I'd love to present a complete argument, but management really has no time or interest in hearing me out until there is a critical issue. I can't really articulate the benefits simply, as it has always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers with no development experience behind this idea. Are there any good resources or literature on the topic?

    Read the article

  • Abnormally disconnected TCP sockets and write timeout

    - by James
    Hello, I will try to explain the problem in the fewest possible words. I am using C++ Builder 2010. I am using TIdTCPServer and sending voice packets to a list of connected clients. Everything works OK until a client disconnects abnormally, for example through a power failure. I can reproduce a similar disconnect by cutting the Ethernet connection of a connected client. So now we have a disconnected socket, but as you know it is not yet detected at the server side, so the server will continue to try to send data to that client too. But when the server tries to write data to that disconnected client, Write() or WriteLn() HANGS there trying to write; it is as if it is waiting for some kind of write timeout. This hangs the whole packet-distribution process, creating a lag in data transmission to all the other clients. After a few seconds a "Socket Connection Closed" exception is raised and data flow continues. Here is the code:

        try
        {
            EnterCriticalSection(&SlotListenersCriticalSection);
            for(int i = 0; i < SlotListeners->Count; i++)
            {
                try
                {
                    // Here the process will HANG for several seconds on a disconnected socket
                    ((TIdContext*) SlotListeners->Objects[i])->Connection->IOHandler->WriteLn("Some DATA");
                }
                catch(Exception &e)
                {
                    SlotListeners->Delete(i);
                }
            }
        }
        __finally
        {
            LeaveCriticalSection(&SlotListenersCriticalSection);
        }

    OK, I already have a keep-alive mechanism which disconnects the socket after n seconds of inactivity. But as you can imagine, that mechanism can't sync exactly with this broadcasting loop, because the broadcasting loop is running almost all the time. So is there any write timeout I can specify, maybe through the IOHandler or something? I have seen many threads about "detecting a disconnected TCP socket", but my problem is a little different: I need to avoid that hang-up of a few seconds during the write attempt. So is there any solution? Or should I consider using a different mechanism for such data broadcasting, for example having the broadcasting loop put the data packet into some kind of FIFO buffer while client threads continuously check for available data and deliver it to themselves? That way, if one thread hangs it will not stop or delay the overall distribution. Any ideas please? Thanks for your time and help. Regards, James
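
    The last idea in the question, a per-client FIFO drained by its own writer, is the usual way to keep one stalled peer from delaying the rest: combined with a send timeout, a dead socket only ever blocks its own thread. A rough sketch of that pattern in Python (not Indy/C++ Builder), with placeholder names, just to illustrate the shape of it:

        import queue
        import socket
        import threading

        SEND_TIMEOUT = 2.0     # seconds a single send may block before we give up

        class ClientSlot:
            """One connected listener: a socket plus its own outgoing FIFO."""
            def __init__(self, sock: socket.socket):
                sock.settimeout(SEND_TIMEOUT)
                self.sock = sock
                self.outbox: "queue.Queue[bytes]" = queue.Queue(maxsize=100)
                threading.Thread(target=self._writer, daemon=True).start()

            def _writer(self):
                # Only this thread ever blocks on a slow or dead socket.
                try:
                    while True:
                        self.sock.sendall(self.outbox.get())
                except (socket.timeout, OSError):
                    self.sock.close()      # drop the dead client

        def broadcast(clients, packet: bytes):
            # The distribution loop never touches a socket directly, so it
            # cannot hang on an abnormally disconnected client.
            for client in clients:
                try:
                    client.outbox.put_nowait(packet)
                except queue.Full:
                    pass                   # client has fallen too far behind; skip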

    Read the article

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to the HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get: Windows failed to start. A recent hardware or software change might be the cause. To fix the problem... Status: 0xc000000e Info: The boot selection failed because a required device is inaccessible. The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my recovery environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there, only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, but it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)

    Read the article

  • How to use a common library of environment variables among different languages?

    - by JDS
    We have three main languages with which we perform system tasks: Bash, Ruby, and PHP, and Perl. Four, four main languages. We use managed environment variables to provide authorization info that automated scripts need, for example a MySQL user account and password. We'd like to use one single managed file to maintain these variables. In some instances, for example in cron, these environment variables are not available. They are available in CLI scripts because we source the env file in everyone's profile, but something like cron doesn't do that. On the CLI, when the env file is sourced, any given script can access those variables: Bash has them directly, PHP in $_ENV, Ruby in ENV, etc. We can't source the file into non-Bash scripts, because most languages implement shell commands by running them in a subshell. We considered parsing the Bash, converting it to the script's language, and running the equivalent of exec(parsed_output) on the resulting strings. What is a good solution for providing managed environment vars to scripts running in cron, or similar?
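
    One approach that keeps a single managed file: restrict it to plain KEY=value lines (no shell expansion), so every language can parse it directly instead of needing a login shell to source it, and cron jobs read the same file the interactive shells do. A minimal Python reader, assuming a file such as /etc/managed_env (a placeholder name, not from the question):

        import os

        def load_managed_env(path="/etc/managed_env"):
            """Parse simple KEY=value lines into os.environ and return them."""
            values = {}
            with open(path) as handle:
                for line in handle:
                    line = line.strip()
                    if not line or line.startswith("#") or "=" not in line:
                        continue
                    key, _, value = line.partition("=")
                    values[key.strip()] = value.strip().strip('"')
            os.environ.update(values)   # visible to this process and its children
            return values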

    Read the article

  • Production and Test Server using Git

    - by Mike Silvis
    I am running a PHP/MySQL website and have set up a remote repository on my own server using Git. I now want a way to have a production and a test server, and to somehow be able to push my changes from dev to production easily and seamlessly.

    Read the article

  • Ruby on Rails App not starting in production mode

    - by Ermin
    Everything works fine in development mode, but when I try to start my app in production mode (RAILS_ENV=production script/server) I get the following error:

        /opt/ruby1.8/lib/ruby/gems/1.8/gems/searchlogic-2.4.19/lib/searchlogic/named_scopes/conditions.rb:81:in `method_missing': protected method `scope' called for #<Class:0x7f41de524410> (NoMethodError)
            from /opt/ruby1.8/lib/ruby/gems/1.8/gems/searchlogic-2.4.19/lib/searchlogic/named_scopes/association_conditions.rb:19:in `method_missing'
            from /opt/ruby1.8/lib/ruby/gems/1.8/gems/searchlogic-2.4.19/lib/searchlogic/named_scopes/association_ordering.rb:27:in `method_missing'
            from /opt/ruby1.8/lib/ruby/gems/1.8/gems/searchlogic-2.4.19/lib/searchlogic/named_scopes/ordering.rb:30:in `method_missing'
            from /opt/ruby1.8/lib/ruby/gems/1.8/gems/searchlogic-2.4.19/lib/searchlogic/named_scopes/or_conditions.rb:28:in `method_missing'
            from /opt/ruby1.8/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1959:in `method_missing_without_paginate'
            from /opt/ruby1.8/lib/ruby/gems/1.8/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:170:in `method_missing'
            from /opt/ruby1.8/lib/ruby/gems/1.8/gems/acts_as_commentable-3.0.0/lib/comment_methods.rb:12:in `included'
            from .../app/models/comment.rb:2:in `include'
            from .../app/models/comment.rb:2
            from /opt/ruby1.8/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /opt/ruby1.8/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'...

    It seems to me that the acts_as_commentable gem is causing this. But how come it works fine in development mode?

    Read the article

  • How to handle javascript & css files across a site?

    - by Industrial
    Hi everybody, I have had some thoughts recently on how to handle shared JavaScript and CSS files across a web application. In the web application I am currently working on, I have quite a large number of different JavaScript and CSS files placed in a folder on the server. Some of the files are reused, while others are not. On a production site it's quite stupid to have a high number of HTTP requests and many kilobytes of unnecessary JavaScript and redundant CSS being loaded. The solution, of course, is to create one big bundled file per page that contains only the necessary information, which is then minified and sent compressed (gzip) to the client. There's no trouble creating a bundle of JavaScript files and minifying them manually if you only do it once, but since the app is continuously maintained and things change and develop, it quite soon becomes a headache to do this manually while pushing out new updates that feature changes to the JavaScript and/or CSS files. What's a good approach to handling this? How do you handle this in your application?
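
    The bundling step described above is straightforward to fold into the build instead of doing it by hand. A bare-bones Python sketch: concatenate the files listed in a per-page manifest and write the bundle plus a pre-compressed .gz next to it (minification would be one extra pass with whatever minifier the project already uses; the file names here are placeholders):

        import gzip
        from pathlib import Path

        # Per-page manifests: only the scripts that page actually needs.
        MANIFESTS = {
            "home.bundle.js": ["js/jquery.js", "js/menu.js", "js/home.js"],
        }

        def build_bundles(out_dir="public/build"):
            out = Path(out_dir)
            out.mkdir(parents=True, exist_ok=True)
            for bundle_name, sources in MANIFESTS.items():
                combined = ";\n".join(Path(src).read_text() for src in sources)
                (out / bundle_name).write_text(combined)
                # Pre-compress so the web server can serve the .gz directly.
                (out / (bundle_name + ".gz")).write_bytes(
                    gzip.compress(combined.encode("utf-8"))
                )

        if __name__ == "__main__":
            build_bundles()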

    Read the article

  • Programmatically retrieve disconnected network adapter information in .NET

    - by Soo Wei Tan
    I have an application written in C# that needs to retrieve information such as the IP address and subnet mask from a disconnected network adapter. I've tried various methods, such as WMI and the .NET NetworkAdapter class, but they don't return any useful data when the network adapter is disconnected. I'm pretty sure Windows keeps this information somewhere, since I can apply network settings using netsh and they appear correctly in the Control Panel. One thing that worked for me on XP was to parse the output of the netsh tool, which returned information even for a disconnected adapter. However, this doesn't seem to work on Windows 7.

    Windows XP output:

        Configuration for interface "Local Area Connection 5"
        DHCP enabled:       No
        IP Address:         169.254.0.128
        SubnetMask:         255.255.255.0
        InterfaceMetric:    0

    Windows 7 output:

        Configuration for interface "Local Area Connection 2"
        DHCP enabled:       No
        InterfaceMetric:    5

    Any ideas?
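
    One place Windows does keep the configured (as opposed to operational) settings is the registry, under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{adapter GUID}; values such as EnableDHCP, IPAddress and SubnetMask are generally readable there whether or not the adapter has link. A sketch of reading them from Python via winreg, offered as an illustration of the idea rather than a drop-in for the C# app (the .NET equivalent would go through Microsoft.Win32.Registry):

        import winreg

        INTERFACES = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

        def adapter_settings():
            """Yield (guid, settings) for every TCP/IP interface key in the registry."""
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, INTERFACES) as root:
                index = 0
                while True:
                    try:
                        guid = winreg.EnumKey(root, index)
                    except OSError:
                        break                      # no more subkeys
                    index += 1
                    settings = {}
                    with winreg.OpenKey(root, guid) as key:
                        for name in ("EnableDHCP", "IPAddress", "SubnetMask", "DhcpIPAddress"):
                            try:
                                settings[name], _ = winreg.QueryValueEx(key, name)
                            except FileNotFoundError:
                                pass               # value not present for this adapter
                    yield guid, settings

        for guid, settings in adapter_settings():
            print(guid, settings)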

    Read the article

  • NHibernate auditing in disconnected mode

    - by Ciaran
    I'm developing an app with a Silverlight UI, transferring my domain objects over WCF and persisting them via NHibernate. I'm therefore working with NHibernate in a disconnected mode. I'm already using the NHibernate PreUpdate and PreInsert EventListeners to perform some metadata operations (updating Create/Update date, created/updated by etc) and they are working fine. I now have a requirement to perform data logging on some of my domain objects. So I will need to have an audit table that has a before-save and after-save state of certain entities. I had wanted to use the @event.Persister.OldState and @event.Persister.NewState to perform this logging, but because I am in a disconnected scenario (using different Sessions from when data is retrieved to when it is persisted), @event.Persister.OldState is null when I am saving my changes back to the database. How is anyone else doing data logging in a disconnected scenario with NHibernate?

    Read the article

  • Is there a small portable Linux with a good development environment?

    - by Sriram
    Let me put it this way: (1) I use Windows, and my company wants me to use Windows; (2) I like Linux; (3) I don't want to use Cygwin; (4) I want a simple, portable Linux with a development environment (make, gcc, g++, llvm, ...), and bash and vi are enough for me, no GUI needed. These four points never change. ;) I tried Damn Small Linux; it's awesome, but it doesn't have what I need. So is there a portable Linux distribution that I can run from Windows using QEMU or something, with a good, up-to-date development environment? Thanks in advance.

    Read the article

  • where.exe does not find OpenSSL libs when %ProgramFiles% variable is used in the PATH environment variable

    - by Piotr Dobrogost
    I installed both the 32-bit and 64-bit versions of the OpenSSL libs on Vista x64. The 32-bit version was installed in c:\Program Files (x86)\OpenSSL and the 64-bit version in c:\Program Files\OpenSSL. Then I added the entry %ProgramFiles%\OpenSSL to the PATH environment variable. %ProgramFiles%\OpenSSL expands to c:\Program Files (x86)\OpenSSL for 32-bit programs and to c:\Program Files\OpenSSL for 64-bit programs. The idea is to have 32-bit programs use the 32-bit version of the OpenSSL libs and 64-bit programs use the 64-bit version. I wanted to check whether this works by running 32-bit cmd.exe and issuing where ssleay32.dll, and then doing the same from 64-bit cmd.exe. However, in both cases I get the error INFO: Could not find files for the given pattern(s). What's wrong? This is a follow-up to Different PATH environment variable for 32bit and 64bit Windows - is it possible?

    Read the article

  • Scheduled Tasks and Environment Variables

    - by Andrew J. Brehm
    I have a scheduled task, a batch file, that uses an environment variable which is set system-wide. On server 1, the scheduled task runs under a domain account and the environment variable works. The environment variable also exists in my session and when I use runas as the service account. On server 2, the scheduled task runs under a different domain account and the environment variable DOES NOT work. However, the environment variable does exist in my session and when I use runas as the service account. On both servers the environment variable was originally set system-wide by the same script. The script runs again every now and then, and as far as I can see no one has tampered with the environment variable. The scheduled tasks are set up identically on the two servers (using the same XML file) and the two service accounts are identically configured (as far as I know). What am I doing wrong?
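
    When two hosts disagree like this, a quick check is to have the task itself dump the environment it actually receives and diff the two outputs; services such as the Task Scheduler typically only pick up changes to system-wide variables after they (or the machine) are restarted, so a stale cached environment on server 2 is a common culprit. A tiny probe, shown in Python only for brevity (the batch-file equivalent is simply set > C:\temp\task_env.txt):

        import os
        import sys

        # Write every variable this process actually sees, so the two servers'
        # runtime environments can be compared directly.
        out_path = sys.argv[1] if len(sys.argv) > 1 else r"C:\temp\task_env.txt"
        with open(out_path, "w") as out:
            for name in sorted(os.environ):
                out.write(f"{name}={os.environ[name]}\n")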

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >