Search Results

Search found 73305 results on 2933 pages for 'copy run start'.

Page 626 of 2933

  • NUnit integration programmatically with spring

    - by harkon
    Hi! I have a component-based architecture framework and I use NUnit for isolated testing - okay so far. Now I want to enable integration tests, so the tests should use real implementations of the existing components. Each component has a life cycle (init, start and stop), and I created an NUnit component: in its start section the NUnit console runner is executed. Now, if I have a test fixture class in one of the DLLs in the execution path, the runner executes it - fine! But, and this is crucial: each implementation to be tested already exists in the process, and I want to use these instances for testing. If I use the NUnit runner in the current way, each instance gets created twice. Above all, I have a Spring container and an implementation registry; via this registry I can get access to all instances in the process. But how do I give the test fixture access to the existing registry? Sure, I could start the component architecture framework in the startup of the NUnit runner - but that is not what I want. My guide is the Apache Cactus framework (with JUnit and Tomcat, JBoss, etc.). Can someone help? Thanks a lot! Check: http://cone.codeplex.com
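
    One workaround (a sketch, not something from the original post, and only applicable if the NUnit runner is hosted in the same process/AppDomain rather than launched as a separate nunit-console.exe) is to expose the already-running registry through a static holder that the framework fills in before starting the runner, and that fixtures read instead of creating fresh instances. The type and member names below are hypothetical placeholders.

        // Illustrative stand-ins; the framework's real registry type would be used instead.
        public interface IImplementationRegistry
        {
            T Resolve<T>();
        }

        // Hypothetical ambient holder, set once by the hosting component
        // before it starts the in-process NUnit runner.
        public static class RegistryContext
        {
            public static IImplementationRegistry Current { get; set; }
        }

        [NUnit.Framework.TestFixture]
        public class PaymentComponentIntegrationTests
        {
            [NUnit.Framework.Test]
            public void Uses_the_live_instance_from_the_host_process()
            {
                var registry = RegistryContext.Current;   // live registry, no second instance
                NUnit.Framework.Assert.IsNotNull(registry);
            }
        }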

  • Android - Two different programs at the same time in an emulator

    - by Léa Massiot
    I'm new to Android development. My OS is WinXP. I'm trying to install two different applications on an Android device emulator from the command line. I have two Android projects, "ap1" and "ap2". In the "ap1" project directory I ran "ant debug" and got an "ap1.apk" executable; in the "ap2" project directory I ran "ant debug" and got an "ap2.apk" executable. I created an Android Virtual Device and launched the emulator:

        android create avd -n avd1 -t 1 --abi x86
        emulator -avd avd1 -verbose

    The "adb devices" command returns:

        List of devices attached
        emulator-5554   device

    I installed the first program on the emulator and ran it - it worked:

        adb -s emulator-5554 install "ap1.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1

    I installed the second program and ran it the same way - it worked too:

        adb -s emulator-5554 install "ap2.apk"
        adb shell am start -a android.intent.action.MAIN -n my.pkg2.android/.AnotherActivity1

    All this works, except that the second executable "replaced" the first one. If I try to run the first executable again, I get an error:

        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1
        Starting: Intent { act=android.intent.action.MAIN cmp=my.pkg.android/.Activity1 }
        Error type 3
        Error: Activity class {my.pkg.android/my.pkg.android.Activity1} does not exist.

    It looks like I can't have the two apps in the emulator at the same time. What do you think? What do I have to do to have both apps available (at the same time) in the emulator? Thank you for helping. Best regards.
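
    A hedged guess, since both projects were probably created from the same template: if the two AndroidManifest.xml files declare the same package attribute, the second install silently replaces the first (same package name means same app to the package manager). The sketch below shows the manifests differing only in that attribute; the package names are the ones from the am start commands above.

        <!-- ap1/AndroidManifest.xml -->
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="my.pkg.android">
            <application android:label="ap1">
                <activity android:name=".Activity1">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
        </manifest>

        <!-- ap2/AndroidManifest.xml must declare a DIFFERENT package,
             e.g. package="my.pkg2.android"; otherwise installing ap2 replaces ap1. -->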

  • 32/64 bit problems with Eclipse CDT on Ubuntu

    - by waffleShirt
    I have just recently started running Linux on my PC and I am trying to start learning OpenGL. I am using the latest version of Eclipse CDT as my IDE, and my system is Ubuntu 10.10, 64-bit. The problem I am having is that whenever I try to run a build from within the IDE I get the error message "Launch Failed. Binary Not Found." I've done a lot of looking around on the internet but I still can't solve the problem. I know for a fact that the binary is built; it can be run from a terminal window. According to posts I have seen, the problem is that Eclipse tries to run a 32-bit binary, but GCC 4.4.5 defaults to 64-bit binaries on a 64-bit system. I've seen a lot of information about using the -m32 flag in makefiles, but then I still get the following output in Eclipse:

        make all
        g++ -o HelloWorld2 main.o
        /usr/bin/ld: i386 architecture of input file `main.o' is incompatible with i386:x86-64 output
        /usr/bin/ld: final link failed: Invalid operation
        collect2: ld returned 1 exit status
        make: *** [HelloWorld2] Error 1

    What I would like to know is how to either get Eclipse to launch the 64-bit binaries, or have Eclipse correctly compile 32-bit binaries.
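
    For reference, a hedged sketch of the flag pairing that avoids this particular linker error: -m32 has to be applied to both the compile and the link step (the link line above has no -m32, which is why ld sees mixed architectures). The OpenGL libraries on the link line are an assumption about the project, and a 32-bit build on 64-bit Ubuntu also needs the multilib packages.

        # Either build 32-bit end to end (requires g++-multilib on Ubuntu)...
        g++ -m32 -c main.cpp -o main.o
        g++ -m32 -o HelloWorld2 main.o -lGL -lglut
        # ...or build 64-bit end to end and point Eclipse's launch configuration at that binary:
        g++ -c main.cpp -o main.o
        g++ -o HelloWorld2 main.o -lGL -lglut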

  • Does QThread::sleep() require the event loop to be running?

    - by suszterpatt
    I have a simple client-server program written in Qt, where processes communicate using MPI. The basic design I'm trying to implement is the following: the first process (the "server") launches a GUI (derived from QMainWindow), which listens for messages from the clients (using repeat-fire QTimers and asynchronous MPI receive calls), updates the GUI depending on what messages it receives, and sends a reply to every message. Every other process (the "clients") runs in an infinite loop, and all they are intended to do is send a message to the server process, receive the reply, go to sleep for a while, then wake up and repeat. Every process instantiates a single object derived from QThread, and calls its start() method. The run() methods of these classes all look like this (from foo.cpp):

        void Foo::run()
        {
            while (true) {
                // Send message to the first process
                // Wait for a reply
                // Do uninteresting stuff with the reply
                sleep(3);   // also tried QThread::sleep(3)
            }
        }

    In the client's code, there is no call to exec() anywhere, so no event loop should start. The problem is that the clients never wake up from sleeping (if I surround the sleep() call with two writes to a log file, only the first one is executed, control never reaches the second). Is this because I didn't start the event loop? And if so, what is the simplest way to achieve the desired functionality?
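
    For comparison, a sketch of an event-loop-driven variant of the worker (assuming Foo declares Q_OBJECT and an exchangeWithServer() slot): QThread::sleep() itself is an ordinary blocking sleep and does not need an event loop, but a QTimer plus exec() gives the thread a loop in case queued signals, slots or timers are ever needed there.

        // Sketch only: the MPI send/receive is the poster's, the timer-driven shape is illustrative.
        void Foo::run()
        {
            QTimer timer;
            timer.setInterval(3000);                       // one round every 3 seconds
            connect(&timer, SIGNAL(timeout()),
                    this, SLOT(exchangeWithServer()),
                    Qt::DirectConnection);                 // slot executes in this worker thread
            timer.start();
            exec();                                        // per-thread event loop
        }

        void Foo::exchangeWithServer()
        {
            // Send message to the first process, wait for the reply, process it.
        }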

  • Magic function `bash` not found

    - by inspectorG4dget
    I have a bunch of simulations that I want to run on a high-performance cluster, on which I should make reservations to get computing time. Since the reservations are bounded by time, I am developing an automation script that I can scp into the cluster and run. This script will then download the relevant simulation files, run them, and upload the results. Part of this automation script is in bash (cp, scp, etc.) and the rest is in Python. In order to develop this automation, I am using an IPython notebook. So far, I've coded all the Python automation stuff in my IPython notebook and am trying to write the bash part of it now. However, it seems that the %%bash magic doesn't work in my IPython notebook. I get the following error when I have this code in my cell:

        %%bash
        echo hi

    Error:

        File "<ipython-input-22-62ec98e35224>", line 3
          echo hi
                ^
        SyntaxError: invalid syntax

    On a whim, I tried this:

        %%bash
        print "hi"

    Error:

        hi
        ERROR: Magic function `bash` not found.

    So I tried this with %%system, %%! and %%shell, but none of those work; they all give me the same error. Why is this happening? How can I fix this? Metadata: IPython 0.13.dev, Python 2.7.1, Mac OS X Lion
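
    If the %%bash cell magic genuinely is not available in this IPython build, a hedged fallback is to drive the shell steps from plain Python, which behaves the same once the script moves to the cluster; the host and file names below are placeholders.

        # Fallback sketch: run the shell commands through subprocess instead of a %%bash cell.
        import subprocess

        def run(cmd):
            """Run a shell command and return its output, raising if it fails."""
            return subprocess.check_output(cmd, shell=True)

        print run("echo hi")                                   # Python 2 print, as in the notebook
        run("scp simulation.tar.gz user@cluster:/scratch/")    # placeholder host and path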

  • Maven: trying to get my submodule's poms to NOT inherit a plugin in the parent

    - by jobrahms
    My project has a parent POM and several submodule POMs. I've put a plugin in the parent that is responsible for building our installer distributables (using install4j). It doesn't make sense to have this plugin run on the submodules, so I've put <inherited>false</inherited> in the plugin's config, as seen below. The problem is, when I run mvn clean install install4j:compile it cleans, compiles, and runs the install4j plugin on the parent, but then it tries to run it on the child modules and crashes. Here's the plugin config:

        <plugin>
          <groupId>com.google.code.maven-install4j</groupId>
          <artifactId>maven-install4j-plugin</artifactId>
          <version>0.1.1</version>
          <inherited>false</inherited>
          <configuration>
            <executable>${devenv.install4jc}</executable>
            <configFile>${basedir}/newinstaller/ehd-demo.install4j</configFile>
            <releaseId>${project.version}</releaseId>
            <attach>false</attach>
            <skipOnMissingExecutable>true</skipOnMissingExecutable>
          </configuration>
        </plugin>

    Am I misunderstanding the purpose of inherited=false? What is the correct way to get this to work? I'm using Maven 2.2.0.
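
    A hedged explanation and sketch: <inherited>false</inherited> only stops the plugin declaration from being inherited by child POMs, while a goal named directly on the command line (install4j:compile) is invoked against every module in the reactor regardless. One alternative is to bind the goal to a lifecycle phase in the parent POM only, so that a plain mvn clean install triggers it just there:

        <!-- Sketch: bind install4j:compile to a phase instead of invoking it from the CLI. -->
        <plugin>
          <groupId>com.google.code.maven-install4j</groupId>
          <artifactId>maven-install4j-plugin</artifactId>
          <version>0.1.1</version>
          <inherited>false</inherited>          <!-- children do not inherit this binding -->
          <executions>
            <execution>
              <id>build-installer</id>
              <phase>package</phase>            <!-- runs as part of "mvn clean install" -->
              <goals>
                <goal>compile</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <!-- same configuration block as shown above -->
          </configuration>
        </plugin>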

  • Python 3.3 Webserver restarting problems

    - by IPDGino
    I have made a simple webserver in Python, and had some problems with it before, as described here: Python (3.3) Webserver script with an interesting error. In that question, the answer was to use a while True: loop so that any crashes or errors would be resolved instantly, because it would just start itself again. I've used this for a while, and still want to make the server restart itself every few minutes, but on Linux for some reason it won't work for me. On Windows the code below works fine, but on Linux it keeps failing with the error shown further down.

        # Handler class up here
        ...

        class Server:
            def __init__(self):
                self.server_class = HTTPServer
                self.server_adress = ('MY IP GOES HERE, or localhost', 8080)
                global httpd
                httpd = self.server_class(self.server_adress, Handler)
                self.main()

            def main(self):
                if count > 1:
                    global SERVER_UP_SINCE
                    HOUR_CHECK = int(((count - 1) * RESTART_INTERVAL) / 60)
                    SERVER_UPTIME = str(HOUR_CHECK) + " MINUTES"
                    if HOUR_CHECK > 60:
                        minutes = int(HOUR_CHECK % 60)
                        hours = int(HOUR_CHECK // 60)
                        SERVER_UPTIME = ("%s HOURS, %s MINUTES" % (str(hours), str(minutes)))
                    SERVING_ON_ADDR = self.server_adress
                    SERVER_UP_SINCE = str(SERVER_UP_SINCE)
                    SERVER_RESTART_NUMBER = count - 1
                    print("""
                    SERVER INFO
                    -------------------------------------
                    SERVER_UPTIME: %s
                    SERVER_UP_SINCE: %s
                    TOTAL_FILES_SERVED: %d
                    SERVING_ON_ADDR: %s
                    SERVER_RESTART_NUMBER: %s
                    \n\nSERVER HAS RESTARTED
                    """ % (SERVER_UPTIME, SERVER_UP_SINCE, TOTAL_FILES, SERVING_ON_ADDR, SERVER_RESTART_NUMBER))
                else:
                    print("SERVER_BOOT=1\nSERVER_ONLINE=TRUE\nRESTART_LOOP=TRUE\nSERVING_ON_ADDR:%s" % str(self.server_adress))
                while True:
                    try:
                        httpd.serve_forever()
                    except KeyboardInterrupt:
                        print("Shutting down...")
                        break
                httpd.shutdown()
                httpd.socket.close()
                raise(SystemExit)
                return

        def server_restart():
            """If you want the restart timer to be longer, replace the number after the RESTART_INTERVAL variable"""
            global RESTART_INTERVAL
            RESTART_INTERVAL = 10
            threading.Timer(RESTART_INTERVAL, server_restart).start()
            global count
            count = count + 1
            instance = Server()

        if __name__ == "__main__":
            global SERVER_UP_SINCE
            SERVER_UP_SINCE = strftime("%d-%m-%Y %H:%M:%S", gmtime())
            server_restart()

    Basically, I make a thread to restart it every 10 seconds (for testing purposes) and start the server. After ten seconds it will say:

        File "/home/username/Desktop/Webserver/server.py", line 199, in __init__
            httpd = self.server_class(self.server_adress, Handler)
        File "/usr/lib/python3.3/socketserver.py", line 430, in __init__
            self.server_bind()
        File "/usr/lib/python3.3/http/server.py", line 135, in server_bind
            socketserver.TCPServer.server_bind(self)
        File "/usr/lib/python3.3/socketserver.py", line 441, in server_bind
            self.socket.bind(self.server_address)
        OSError: [Errno 98] Address already in use

    As you can see from the except KeyboardInterrupt branch, I tried everything to make the server and the program stop, but it will NOT stop. But the thing I really want to know is how to make this server able to restart without giving these wonky errors.
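
    The Errno 98 usually means the previous listening socket is still bound when the next Server() is constructed. A minimal sketch of one way around it (not the full script above): mark the socket reusable and shut the old server down before creating a new one.

        # Minimal sketch: reuse the address and close the old server before restarting.
        from http.server import HTTPServer, SimpleHTTPRequestHandler
        import threading

        class ReusableHTTPServer(HTTPServer):
            allow_reuse_address = True              # sets SO_REUSEADDR before bind()

        httpd = None

        def restart_server(address=("localhost", 8080)):
            global httpd
            if httpd is not None:
                httpd.shutdown()                    # stops serve_forever() in the old thread
                httpd.server_close()                # releases the listening socket
            httpd = ReusableHTTPServer(address, SimpleHTTPRequestHandler)
            threading.Thread(target=httpd.serve_forever, daemon=True).start()
            threading.Timer(10, restart_server).start()   # schedule the next restart

        if __name__ == "__main__":
            restart_server()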

  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires separate SQL Server databases for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new "database instance" to SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases. In the process of creating the prototype database, I tried attaching a copy of the database (a companion .MDF/.LDF pair) to SQL Server using SQL Server Management Studio, and discovered that SSMS expects the database to reside in:

        C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF

    Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories? Or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this.) NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise edition. So "User Instances" are not an option.
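
    For what it's worth, the default DATA directory is only where the SSMS attach dialog starts browsing; the engine itself will attach files from any folder its service account can read. A hedged T-SQL sketch with placeholder folder, file and database names:

        -- Sketch: attach a per-customer copy that lives in its own folder.
        CREATE DATABASE [Customer0001]
        ON (FILENAME = N'C:\Customers\0001\Prototype.mdf'),
           (FILENAME = N'C:\Customers\0001\Prototype_log.ldf')
        FOR ATTACH;

        -- And to detach it again later:
        EXEC sp_detach_db @dbname = N'Customer0001';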

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about ObjectiveC's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If indeed there's copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding of some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference and thus ObjectiveC is a somewhat unique situation with having to deal with copying what's on the stack.
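
    A small C# illustration of the capture-versus-snapshot distinction the question circles around (it mirrors the closed-over-loop-variable pitfall in the linked Lippert post; it says nothing about how Objective-C blocks are implemented):

        using System;
        using System.Collections.Generic;

        class CaptureDemo
        {
            static void Main()
            {
                var actions = new List<Action>();
                for (int i = 0; i < 3; i++)
                    actions.Add(() => Console.WriteLine(i));   // captures the VARIABLE i (hoisted to the heap)
                foreach (var a in actions) a();                // prints 3 3 3 - one shared variable

                actions.Clear();
                for (int i = 0; i < 3; i++)
                {
                    int copy = i;                              // a fresh variable per iteration
                    actions.Add(() => Console.WriteLine(copy));
                }
                foreach (var a in actions) a();                // prints 0 1 2 - per-iteration snapshots
            }
        }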

  • Getting around IBActions limited scope

    - by Septih
    Hello, I have an NSCollectionView, and the view is an NSBox with a label and an NSButton. I want a double click, or a click of the NSButton, to tell the controller to perform an action with the represented object of the NSCollectionViewItem. The item view has been subclassed; the code is as follows:

        #import <Cocoa/Cocoa.h>
        #import "WizardItem.h"

        @interface WizardItemView : NSBox {
            id delegate;
            IBOutlet NSCollectionViewItem * viewItem;
            WizardItem * wizardItem;
        }
        @property(readwrite,retain) WizardItem * wizardItem;
        @property(readwrite,retain) id delegate;
        -(IBAction)start:(id)sender;
        @end

        #import "WizardItemView.h"

        @implementation WizardItemView
        @synthesize wizardItem, delegate;

        -(void)awakeFromNib {
            [self bind:@"wizardItem" toObject:viewItem withKeyPath:@"representedObject" options:nil];
        }

        -(void)mouseDown:(NSEvent *)event {
            [super mouseDown:event];
            if([event clickCount] > 1) {
                [delegate performAction:[wizardItem action]];
            }
        }

        -(IBAction)start:(id)sender {
            [delegate performAction:[wizardItem action]];
        }
        @end

    The problem I've run into is that, as an IBAction, the only things in the scope of -start: are the things that have been bound in IB, namely delegate and viewItem. This means that I cannot get at the represented object to send it to the delegate. Is there a way around this limited scope, or a better way of getting hold of the represented object? Thanks.
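
    A hedged sketch of one way around this: the viewItem outlet is already wired up in IB, so the action can ask it for its represented object directly at click time instead of relying on the bound ivar. Only representedObject (an NSCollectionViewItem method) is added here; everything else comes from the posted code.

        -(IBAction)start:(id)sender {
            WizardItem *item = (WizardItem *)[viewItem representedObject];
            [delegate performAction:[item action]];
        }

        -(void)mouseDown:(NSEvent *)event {
            [super mouseDown:event];
            if ([event clickCount] > 1) {
                WizardItem *item = (WizardItem *)[viewItem representedObject];
                [delegate performAction:[item action]];
            }
        }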

  • ~1 second TcpListener Pending()/AcceptTcpClient() lag

    - by cpf
    It's probably easiest to just watch this video: http://screencast.com/t/OWE1OWVkO - as you can see, there is a delay between a connection being initiated (via telnet or Firefox) and my program first getting word of it. Here's the code that waits for the connection:

        public IDLServer(System.Net.IPAddress addr, int port)
        {
            Listener = new TcpListener(addr, port);
            Listener.Server.NoDelay = true;   // I added this just for testing, it has no impact
            Listener.Start();
            ConnectionThread = new Thread(ConnectionListener);
            ConnectionThread.Start();
        }

        private void ConnectionListener()
        {
            while (Running)
            {
                while (Listener.Pending() == false)
                {
                    System.Threading.Thread.Sleep(1);
                }   // this is the part with the lag

                Console.WriteLine("Client available");   // from this point on everything runs perfectly fast
                TcpClient cl = Listener.AcceptTcpClient();
                Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
                proct.Start(cl);
            }
        }

    (I was having some trouble getting the code into a code block.) I've tried a couple of different things - could it be that I'm using TcpClient/Listener instead of a raw Socket object? It's not mandatory TCP overhead, I know, and I've tried running everything in the same thread, etc.
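
    One hedged simplification, sketched against the fields in the post: drop the Pending()/Sleep(1) polling and block in AcceptTcpClient(), which returns as soon as a connection is queued, so any remaining delay is not coming from this loop.

        private void ConnectionListener()
        {
            while (Running)
            {
                TcpClient cl = Listener.AcceptTcpClient();    // blocks until a client connects
                Console.WriteLine("Client available");
                Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
                proct.Start(cl);
            }
            // Note: calling Listener.Stop() from another thread makes AcceptTcpClient()
            // throw, which is one way to break out of this loop on shutdown.
        }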

  • Powershell scripts to backup SQL, SVN

    - by bszom
    I'm trying to use PowerShell to create some backups, and then to copy these to a web folder (or, in other words, upload them to a WebDAV share). At first I thought I'd do the WebDAV stuff from within PowerShell, but it seems this still requires a fair amount of "manual labour", ie: constructing HTTP requests. I then settled for creating a web folder from the script and letting Windows handle the WebDAV stuff. It seems that all it takes to create a web folder is to create a standard shortcut, as described here. What I can't figure out is how to actually copy files to the shortcut's target..? Maybe I'm going about this the wrong way. It would be ideal if I could somehow encrypt the credentials for the WebDAV in the script, then have it create the web folder, shunt over the files, and delete the web folder again. Or even better, not use a web folder at all. Third option would be to just create the web folder manually and leave it there, though I'd rather not. Any ideas/pointers/tips? :)
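
    If the web-folder shortcut turns out to be more trouble than it is worth, a hedged alternative is to PUT the backups straight to the WebDAV URL with WebClient; the URL, local path and credential handling below are placeholders rather than a recommendation for storing secrets.

        # Sketch: upload backup files directly over WebDAV, no web folder needed.
        $cred = Get-Credential                          # or load from an encrypted store
        $wc = New-Object System.Net.WebClient
        $wc.Credentials = $cred.GetNetworkCredential()

        Get-ChildItem "C:\Backups\*.bak" | ForEach-Object {
            $target = "https://example.com/dav/backups/" + $_.Name   # placeholder URL
            $wc.UploadFile($target, "PUT", $_.FullName)
        }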

  • Starting an Intent to Launch an app to Background in Android

    - by Tista
    Hi all, I'm using the Wikitude API 1.1 as an AR viewer in my application. The problem with Wikitude is that if I haven't launched the actual Wikitude application since the phone booted up, I will get a NullPointerException every time I start my own application. So I figure I can start my app first and then check whether Wikitude is installed and/or running. If it's not installed, go to the Market and download it. If it's not running, then we should run it straight to the background so that my app doesn't lose its focus.

        // Workaround for Wikitude
        this.WIKITUDE_PACKAGE_NAME = "com.wikitude";
        PackageManager pacMan = Poligamy.this.getPackageManager();
        try {
            PackageInfo pacInfo = pacMan.getPackageInfo(this.WIKITUDE_PACKAGE_NAME, pacMan.GET_SERVICES);
            Log.i("CheckWKTD", "Wikitude is Installed");
            ActivityManager aMan = (ActivityManager) this.getSystemService(ACTIVITY_SERVICE);
            List<RunningAppProcessInfo> runningApps = aMan.getRunningAppProcesses();
            int numberOfApps = runningApps.size();
            for (int i = 0; i < numberOfApps; i++) {
                if (runningApps.get(i).processName.equals(this.WIKITUDE_PACKAGE_NAME)) {
                    this.WIKITUDE_RUNNING = 1;
                    Log.i("CheckWKTD", "Wikitude is Running");
                }
            }
            if (this.WIKITUDE_RUNNING == 0) {
                Log.i("CheckWKTD", "Wikitude is NOT Running");
                /*final Intent wIntent = new Intent(Intent.ACTION_MAIN, null);
                wIntent.addCategory(Intent.CATEGORY_LAUNCHER);
                final ComponentName cn = new ComponentName("com.wikitude", "com.mobilizy.wikitudepremium.initial.Splash");
                wIntent.setComponent(cn);
                wIntent.setFlags(Intent.FLAG_ACTIVITY_NO_USER_ACTION);
                startActivityIfNeeded(wIntent, 0);*/
            }
        } catch (NameNotFoundException e) {
            // TODO Auto-generated catch block
            Log.i("CheckWKTD", "Wikitude is NOT Installed");
            e.printStackTrace();
            //finish();
        }

    The part I block-commented is the intent to start Wikitude, but I always fail at restricting Wikitude to the background. Any help? Thanks before. Best, Tista

  • Most cost effective way to target multiple mobile platforms

    - by niidto
    Hi I have been given the tasks of speccing a mobile application, which will need to run on approx. 1000 devices. These devices already exist, and consist of iPhones, BlackBerrys, Androids, Windows Mobile and Netbooks. The application will have simple reporting capability, and a collection of forms. Anyway, the obvious solution would be to develop some browser based solution, although given the occasionally connected nature of the devices, there's a potential for data to get lost / not saved. So instead of creating a complex application for each platform, I was thinking we could build what is effectively a form generator, with basic offline storage capability (text files), designed to run on each device, and have the device generate a form, based on for example an XML file that it could request from a server somewhere, resulting in minimal specialist development costs, and the ability to run most of the logic from the server end, with the devices being dumb clients that render forms and upload the data when there is an available connection. Anyway, my question summarised is, how have you made the decision on supporting multiple devices for your application. Is this always an unavoidable problem, and you just have to make the call to support 1 or 2, or pay for developers to write code for each platform, or alternatively supply pre-installed devices to the company? Many thanks James

  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl - in particular the size of a directory - and, upon detecting a change in directory size, perform a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, e.g. 100MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file, even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying over. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl? Edit: Some points to clarify:

    - My Perl script is only monitoring a particular directory and, upon detecting a new file or a new directory, performs an action on it. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
    - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (> 100MB, but usually a couple of GB) and my program fires before this copy completes.
    - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
    - In Windows the size is static, so I cannot detect this change.
    - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid that overhead.

    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy-waits until it detects a change. The change in file/directory size is not in my control.
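
    On the Windows side, a hedged heuristic (a sketch, not from the post) is to treat a file as still copying until its size has stopped changing between two polls and it can be opened for writing - typical Windows copy tools hold the destination open without write sharing until they finish. The path below is a placeholder.

        # Sketch: act on a file only once its size is stable and it can be opened read-write.
        use strict;
        use warnings;

        sub file_is_settled {
            my ($path, $wait) = @_;
            $wait = 5 unless defined $wait;        # seconds between the two size checks
            my $before = -s $path;
            return 0 unless defined $before;
            sleep $wait;
            my $after = -s $path;
            return 0 unless defined $after and $after == $before;
            # A copy in progress is usually opened without write sharing on Windows,
            # so this read-write open fails until the copy has completed.
            open(my $fh, '+<', $path) or return 0;
            close $fh;
            return 1;
        }

        my $file = 'D:/incoming/big_upload.zip';   # placeholder path
        print "ready: $file\n" if file_is_settled($file);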

  • Moving information between databases

    - by williamjones
    I'm on Postgres and have two databases on the same machine, and I'd like to move some data from database Source to database Dest. Database Source:

    - Table User has a primary key
    - Table Comments has a primary key
    - Table UserComments is a join table with two foreign keys, for User and Comments

    Dest looks just like Source in structure, but already has information in its User and Comments tables that needs to be retained. I'm thinking I'll probably have to do this in a few steps:

    Step 1: dump Source using the Postgres COPY command.
    Step 2: in Dest, add a temporary second_key column to both User and Comments, and a new SecondUserComments join table.
    Step 3: import the dumped file into Dest using COPY again, with the old keys going into the second_key columns.
    Step 4: add rows to UserComments in Dest based on the contents of SecondUserComments, only using the real primary keys this time. Could this be done with a SQL command, or would I need a script?
    Step 5: delete the SecondUserComments table and remove the second_key columns.

    Does this sound like the best way to do this, or is there a better way I'm overlooking?
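
    Step 4 can indeed be a plain SQL statement once the temporary second_key columns hold the old Source keys; a hedged sketch follows (the column names id, user_id and comment_id are assumptions, and "User" needs quoting because user is reserved in Postgres):

        -- Sketch for step 4: rebuild the join rows against the real Dest primary keys.
        INSERT INTO UserComments (user_id, comment_id)
        SELECT u.id, c.id
        FROM   SecondUserComments suc
        JOIN   "User"   u ON u.second_key = suc.user_id
        JOIN   Comments c ON c.second_key = suc.comment_id;

        -- Step 5: drop the scaffolding afterwards.
        DROP TABLE SecondUserComments;
        ALTER TABLE "User"   DROP COLUMN second_key;
        ALTER TABLE Comments DROP COLUMN second_key;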

  • Is there a way to programmatically check dependencies of an EXE?

    - by Mason Wheeler
    I've got a certain project that I build and distribute to users. I have two build configurations, Debug and Release. Debug, obviously, is for my use in debugging, but there's an additional wrinkle: the Debug configuration uses a special debugging memory manager, with a dependency on an external DLL. There's been a few times when I've accidentally built and distributed an installer package with the Debug configuration, and it's then failed to run once installed because the users don't have the special DLL. I'd like to be able to keep that from happening in the future. I know I can get the dependencies in a program by running Dependency Walker, but I'm looking for a way to do it programatically. Specifically, I have a way to run scripts while creating the installer, and I want something I can put in the installer script to check the program and see if it has a dependency on this DLL, and if so, cause the installer-creation process to fail with an error. I know how to create a simple CLI program that would take two filenames as parameters, and could run a DependsOn function and create output based on the result of it, but I don't know what to put in the DependsOn function. Does anyone know how I'd go about writing it?
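
    One hedged option for the DependsOn check, assuming the Visual Studio or Windows SDK tools are present on the build machine: shell out to dumpbin /dependents and search its output for the debug DLL. The DLL and executable names below are placeholders.

        :: Installer-script sketch (Windows batch); DebugMM.dll stands in for the real debug DLL.
        dumpbin /dependents "MyApp.exe" | findstr /i "DebugMM.dll"
        if %ERRORLEVEL% EQU 0 (
            echo ERROR: MyApp.exe still imports the debug memory manager DLL.
            exit /b 1
        )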

  • Keeping dates in order when using date_select and discarding year in Rails?

    - by MikeH
    My app has users who have seasonal products. When a user selects a product, we allow him to also select the product's season. We accomplish this by letting him select a start date and an end date for each product. We're using date_select to generate two sets of drop-downs: one for the start date and one for the end date. Including years doesn't make sense for our model, so we're using the option discard_year => true. To explain our problem, consider that our products are apples. Vendor X carries apples every year from September to January. Years are irrelevant here, and that's why we're using discard_year => true. However, while the specific years are irrelevant, the relative point in time from the start date to the end date is relevant. This is where our problem arises. When you use discard_year => true, Rails does set a year in the database - it just doesn't appear in the views. Rails sets all the years to 0001 in our app. Going back to our apple example, this means that the database now thinks the user has selected September 0001 to January 0001. This is a problem for us for a number of reasons. To solve this, the logic that I need to implement is the following:

    - If the season_start month/day is before the season_end month/day, then the standard Rails approach is fine.
    - But if the season_start month/day is AFTER the season_end month/day, then I need to dynamically update the database field such that the year for season_end is equal to the year for season_start + 1.

    My best guess is that I would create a custom method that runs as an after_save or after_update in my Products model, but I'm not really sure how to do this. Ideas? Anybody ever had this issue? Thanks!
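
    A hedged sketch of that callback, written as a before_save so the corrected year is what gets persisted; season_start, season_end and the Product model come from the question, while change(...) is ActiveSupport's date helper:

        # Sketch: keep season_end after season_start by rolling its year forward when needed.
        class Product < ActiveRecord::Base
          before_save :normalize_season_years

          private

          def normalize_season_years
            return if season_start.blank? || season_end.blank?
            wraps = ([season_end.month, season_end.day] <=>
                     [season_start.month, season_start.day]) == -1
            self.season_end = season_end.change(:year => season_start.year + (wraps ? 1 : 0))
          end
        end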

  • Easiest way to programmatically create email accounts for use by web app?

    - by cadwag
    I have a web app that creates groups. Each group gets its own discussion board. I would like to add the feature of allowing users to send emails to their "group" within the web app to start a new discussion, or to reply to an email from the "group" to make a new post in an already ongoing discussion. For example, to start a new discussion a user would send:

        From: [email protected]
        To: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    All members of the group would receive an email:

        From: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?
        Reply-To: [email protected]

    And the app would start a new discussion with:

        Author: Bill Fake
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    This is a pretty standard feature for Google Groups and other big sites. So how do us mere mortals go about implementing this? Is there an easy way? Or do I:

    1. Install postfix
    2. Write scripts to create new accounts for each new group
    3. Access the server periodically via POP3 (or IMAP?) to retrieve the email messages sent to each account
    4. Parse the messages for content

    If it's the latter, did I miss a step?

  • Examples using Active Directory/LDAP groups for permissions \ roles in Rails App.

    - by Nick Gorbikoff
    Hello. I was wondering how other people have implemented this scenario. I have an internal Rails app (inventory management, label printing, shipping, etc.). I'm rewriting security on the system, because the old way (users table, passwords, roles) got too cumbersome to maintain - I used restful_authentication and roles, implemented about 3 years ago. I have already implemented Authlogic with ruby-ldap-net to authenticate users (that was actually surprisingly easy, compared to how I struggled with other frameworks/languages before). The next step is roles. I already have groups defined in Active Directory, so I don't want to run a separate roles system in my Rails app - I just want to reuse the Active Directory groups, since that part of the system is already maintained for other purposes (shared drives, backups, PC access, etc.). So I was wondering if others have experience implementing permissions/roles in a Rails app based on groups in Active Directory or LDAP. The role requirements are also pretty complex. Here is an example: I want users that belong to the supervisors group in AD and to the inventory department to be able to run "advanced" tasks in inventory - adjust quantities, run reports - but supervisors from other departments shouldn't be able to do this. Top management should be able to use those reports (regardless of whether they belong to inventory or not), but not middle management, unless they are in the inventory group. Admins of the system (Domain Admins) should have unrestricted access to the system, except for the HR & Finance parts unless they are in HR (you don't want all system admins, except for one authorized one, to see personal info of other employees). I looked at acl9, cancan and aegis. I was wondering if there are any advantages/cons to using one versus the other for this particular kind of access control based on AD. Suggest other systems if you had good experience. Thank you!!!

  • Transitioning from Domain Authentication to SQL Server Authentication

    - by Albert Perrien
    Greetings all, I've run into a problem that has me stumped. I've put together a database in SQL Server Express, and I'm having a strange permissions problem. The database is on my development machine, with a domain user: DOMAIN\albertp. My development database server is set for "SQL Server and Windows Authentication" mode. I can edit and query my database without any problems when I log in using Windows Authentication. However, when I log in as any user that uses SQL Server authentication (including sa), I get this message when I run queries against my database:

        SELECT * FROM [Testing].[dbo].[AuditingReport]

        Msg 18456, Level 14, State 1, Line 1
        Login failed for user 'auditor'.

    I'm logged into the server from SQL Server Management Studio as 'auditor', and I don't see anything in the error log about the login failure. I've already run:

        USE Testing;
        GRANT ALL TO auditor;
        GO

    And I still get the same error. What permissions do I have to set for the database to be usable by others outside of my personal domain login? Or am I looking at the wrong problem? My ultimate goal is to have the database be accessible from a set of PHP pages, using either a common login (hence 'auditor') or a login specific to each individual user.
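
    A hedged T-SQL checklist for this situation: error 18456 is a login failure rather than a missing table permission, so the usual steps are to confirm mixed-mode authentication is enabled (and the service restarted after switching it on), make sure the server-level login exists, and then map that login to a user inside the database. The password and roles below are placeholders.

        -- Sketch: create the server login, map it into the Testing database, grant access.
        CREATE LOGIN auditor WITH PASSWORD = 'ChangeMe!123';   -- placeholder password

        USE Testing;
        CREATE USER auditor FOR LOGIN auditor;
        EXEC sp_addrolemember 'db_datareader', 'auditor';      -- read access
        EXEC sp_addrolemember 'db_datawriter', 'auditor';      -- write access, if needed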

  • Where should I store shared resources between LocalSystem and regular user with UAC?

    - by rwired
    My application consists of two parts: A Windows Service running under the LocalSystem account and a client process running under the currently logged in regular user. I need to deploy the application across Windows versions from XP up to Win7. The client will retrieve files from the web and collect user data from the user. The service will construct files and data of it's own which the client needs to read. I'm trying to figure out the best place (registry or filesystem, or mix) to store all this. One file the client or service needs to be able to retrieve from the net is an update_patch executable which needs to run whenever an upgrade is available. I need to be sure the initial installer SETUP.EXE, and also the update_patch can figure out this ideal location and set a RegKey to be read later by both client and server telling them the magic location (The SETUP.EXE will run with elevated privileges since it needs to install the service) On my Win7 test system the service %APPDATA% points to: C:\Windows\system32\config\systemprofile\AppData\Roaming and the %APPDATA% of the client points to: C:\Users\(username)\AppData\Roaming Interestingly Google Chrome stores everything (App and Data) in C:\Users\(username)\AppData\Local\Google\Chrome Chrome runs pretty much in exactly the way I want my suite to run (able to silently update itself in the background) What I'm trying to avoid is nasty popups warning the user that the app wants to modify the system, and I want to avoid problems when VirtualStore doesn't exist because the user is running XP/2000/2003 or has UAC turned off. My target audience are non-tech-savvy general Windows users.
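
    For what it's worth, the machine-wide location that both a LocalSystem service and a limited user can reach is the common application data folder (CSIDL_COMMON_APPDATA: C:\ProgramData on Vista/Win7, C:\Documents and Settings\All Users\Application Data on XP), with the elevated SETUP.EXE creating the subfolder, relaxing its ACL if regular users must write there, and recording the path under HKLM. A hedged C# sketch with placeholder names:

        // Sketch: resolve the shared data folder and the HKLM key both sides read.
        using System;
        using System.IO;
        using Microsoft.Win32;

        static class SharedPaths
        {
            public static string DefaultDataDir
            {
                get
                {
                    string common = Environment.GetFolderPath(
                        Environment.SpecialFolder.CommonApplicationData);
                    return Path.Combine(common, @"MyVendor\MyApp");   // placeholder names
                }
            }

            // Written once by the elevated installer, read by the service and the client.
            public static string FromRegistry()
            {
                using (RegistryKey key =
                       Registry.LocalMachine.OpenSubKey(@"SOFTWARE\MyVendor\MyApp"))
                {
                    return key == null
                        ? DefaultDataDir
                        : (string)key.GetValue("DataDir", DefaultDataDir);
                }
            }
        }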

  • Visual studio feature - commenting code Ctrl K - Ctrl C

    - by Michael
    I commented on this answer some time ago regarding how Visual Studio comments out code with // or /* */. I was thinking of revising the answer (to include my findings) but I had to test it first, which kind of confused me. My finding is that depending on where your caret or selection is when you press Ctrl-K, Ctrl-C, you will get either // or /* */. So I tried it out on the following code, where the [x] markers are caret positions:

        [1] [2]FD_ZERO(&mFSet);
        FD_SET(user->mSender, &mFSet);
        timeval zeroTime = { 0, 0 };
        int sel = select(0, NULL, &mFSet, NULL, &zeroTime); [3]
        if (sel == SOCKET_ERROR){
            [5]return false; [4]
        }
        if (sel == 0){
            [6] return false; [7]
        }

    All [1] positions give // for all marked lines. However, selecting from a start position to an end position gives:

        [2] [3] : /* */
        [2] [4] : //
        [2] [5] : /* */
        [2] [more than 5] : //
        [5] [6] : /* */
        [5] [7] : //

    I guess it has to do with forward indentation (not backwards): whenever code is indented more than the starting line you get //, except when you haven't selected any text on the indented line ([2] [5]). But why the distinction? Why not use /* */ when you start at [2] and // when you start at [1]?

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. tests that run against specific XML files) just throw System.IO.DirectoryNotFoundException. The reason for this is clear: it's looking for the supporting XML files, loaded by various unit tests, in the wrong folder. The way my unit tests are structured looks like this:

        +-- project folder
            +-- unit tests folder
                +-- test.xml
                +-- test.cs
            +-- project file.xaml
            +-- project file.xaml.cs

    All of my projects own their own UnitTests folder, which contains the .cs files and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the same folder (actually, I do something like ..\..\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit-test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to just put the testing XML data in code, rather than loading it from a file. I would rather not do this.
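
    A hedged pattern that keeps the XML with the tests and behaves the same under the NUnit GUI and the TeamCity runner: mark test.xml as content copied to the test project's own output (Copy to Output Directory), then resolve it relative to the test assembly itself rather than the current working directory. The helper name below is illustrative; CodeBase is used because it still points at the original folder when NUnit shadow-copies the assembly.

        // Sketch: locate test data next to the test assembly, not the working directory.
        using System;
        using System.IO;
        using System.Reflection;

        internal static class TestData
        {
            public static string PathFor(string fileName)
            {
                var codeBase = new Uri(Assembly.GetExecutingAssembly().CodeBase);
                string assemblyDir = Path.GetDirectoryName(codeBase.LocalPath);
                return Path.Combine(assemblyDir, fileName);
            }
        }

        // usage inside a test:
        //   var doc = System.Xml.Linq.XDocument.Load(TestData.PathFor("test.xml"));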

  • How to update maven local repository with newer artifacts from a remote repository?

    - by Richard
    My maven module A has a dependency on another maven module B provided by other people. When I run "mvn install" under A for the first time, maven downloads B-1.0.jar from a remote repository to my local maven repository. My module A builds fine. In the mean time, other people are deploying newer B-1.0.jar to the remote repository. When I run "mvn install" under A again, maven does not download the newer B-1.0.jar from the remote repository to my local repository. As a result, my module A build fails due to API changes in B-1.0.jar. I could manually delete B-1.0.jar from my local repository. Then maven would download the latest B-1.0.jar from the remote repository the next time when I run "mvn install". My question is how I can automatically let maven download the latest artifacts from a remote repository. I tried to set updatePolicy to "always". But that did not do the trick.
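
    A hedged note plus sketch: Maven treats a plain release version such as B-1.0.jar as immutable, so it will not re-download it even with an aggressive update policy; the usual arrangement is for module B to be published as 1.0-SNAPSHOT while it is still changing, after which the repository's snapshot policy (or mvn -U) pulls fresh timestamped builds automatically. The coordinates and repository URL below are placeholders.

        <!-- Sketch: depend on a SNAPSHOT and let the snapshot updatePolicy refresh it. -->
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>B</artifactId>
          <version>1.0-SNAPSHOT</version>
        </dependency>

        <repositories>
          <repository>
            <id>internal</id>
            <url>http://repo.example.com/maven2</url>
            <snapshots>
              <enabled>true</enabled>
              <updatePolicy>always</updatePolicy>
            </snapshots>
          </repository>
        </repositories>

        <!-- On the command line, "mvn -U clean install" also forces a snapshot update check. -->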
