Search Results

Search found 54052 results on 2163 pages for 'run configuration'.


  • Nginx dynamic upstream configuration / routing

    - by Dan Sosedoff
    I was experimenting with dynamic upstream configuration for nginx and can't find any good solution for pulling upstream configuration from a third-party source like Redis or MySQL. The idea is to have a single configuration file on the primary server and proxy requests to various app servers based on environment conditions. Think of dynamic deployments where you have X servers running Y workers on different ports. For instance, I create a new app and deploy it. The app manager selects a server, rolls out a worker (Ruby/PHP/Python) and reports the ip:port to the central database with status "up". From then on, when I go to the given URL, nginx should proxy all requests to the specified ip:port upstream. The whole thing is pretty similar to what Heroku does, except this proof-of-concept is not supposed to be production ready; it is mostly for internal needs. The easiest solution I found was using the resolver directive with a Ruby-based DNS server. It works, and nginx gets the IP address correctly, but the only problem is that you can't define a port number for that IP. The second solution (which I haven't tried yet) is to roll something else as the proxy server, maybe written in Erlang; in that case we would need something else to serve static content. Any ideas how to implement this in a more flexible and stable way? P.S. Some research options: http://openresty.org/#DynamicRoutingBasedOnRedis https://github.com/nodejitsu/node-http-proxy
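
    A rough sketch of the OpenResty direction mentioned in the first research link: look the backend up in Redis per request and proxy to a variable. The Redis key layout (backend address stored under the request host) and the timeout are assumptions for illustration, not a tested setup:

        server {
            listen 80;
            location / {
                set $backend "";
                access_by_lua '
                    local redis = require "resty.redis"
                    local red = redis:new()
                    red:set_timeout(100)
                    if not red:connect("127.0.0.1", 6379) then
                        return ngx.exit(502)
                    end
                    local target = red:get(ngx.var.host)  -- e.g. "10.0.0.5:9001"
                    if not target or target == ngx.null then
                        return ngx.exit(502)
                    end
                    ngx.var.backend = target
                ';
                proxy_pass http://$backend;
            }
        }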


  • MultiPath configuration on RHEL5 and Clariion CX-300

    - by Kamil Z
    I have a problem with discovering my FC-connected CX-300 storage. Frankly speaking, I'm a complete novice in Fibre Channel, so a step-by-step explanation would be appreciated. My configuration consists of two IBM HS20 blades with RHEL 5.4 on board and 2x QLogic ISP2422-based 4Gb Fibre Channel HBAs on each blade. As FC switches there are two Brocades built into the BladeCenter chassis, and finally there is the EMC Clariion CX-300. The CX-300 and the Brocade switches should be configured properly, because they were working fine with the previous configuration, whose main difference was RHEL3 instead of RHEL 5.4. Below is my output from several useful commands: #lspci | grep Fibre 06:01.0 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02) 06:01.1 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02) #lsmod | grep qla qla2xxx 1084741 0 scsi_transport_fc 37577 1 qla2xxx scsi_mod 141717 10 scsi_dh,qla2xxx,sg,scsi_transport_fc,usb_storage,libata,mptspi,mptscsih,scsi_transport_spi,sd_mod #cat /proc/scsi/scsi Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: LSILOGIC Model: 1030 IM IM Rev: 1000 Type: Direct-Access ANSI SCSI revision: 02 Host: scsi0 Channel: 01 Id: 00 Lun: 00 Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418 Type: Direct-Access ANSI SCSI revision: 04 Host: scsi0 Channel: 01 Id: 00 Lun: 00 Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418 Type: Direct-Access ANSI SCSI revision: 04 I followed the instructions from this site (editing /etc/multipath.conf), but I failed: after multipath -ll the output was empty. Do you have any suggestions about discovering FC-connected LUNs in such a configuration?
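
    A first sanity check, sketched below with guessed host numbers (check ls /sys/class/fc_host/ to see which SCSI hosts belong to the qla2xxx HBAs): rescan the FC hosts so newly zoned LUNs show up in /proc/scsi/scsi (a Clariion presents itself with vendor "DGC"), then rebuild the multipath maps verbosely:

        # host numbers are a guess -- adjust to what /sys/class/fc_host/ shows
        echo "- - -" > /sys/class/scsi_host/host1/scan
        echo "- - -" > /sys/class/scsi_host/host2/scan
        # flush any stale maps and rebuild them with verbose output
        multipath -F
        multipath -v2
        multipath -ll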


  • DCOM configuration: accounts with same name but different passwords problem

    - by archimed7592
    Hello, everybody! I'm experiencing trouble with DCOM configuration. Here is the case: I'm using a product which supports client-server interaction through DCOM, but the client won't get any access to the server if the attempt is made from an account with a name which also exists on the server but has a different password. Basically, if we try to access the server from the Administrator account, which is obviously present on the server machine, we will fail if the client's Administrator password doesn't match the server's. After actively collaborating with the product's developer in attempts to localize the issue, he came back with a "can't be fixed" resolution - or, to call a spade a spade, more likely a "don't know how to fix it" resolution :). I believe there is a solution for this problem and I'm asking you, IT professionals, to help me out with this one. I do realize that the problem may be caused by the way the developer interacts with DCOM, and if so it can't be fixed by means of pure system configuration and the question should be asked at SO. But since I've bumped into the same behavior while working with file/printer sharing - Windows tried to simplify everything and used the currently impersonated credentials to access the share - I hope the solution lies at the system configuration layer. P.S. I believe that the actual software product I'm talking about is entirely irrelevant; however, my experience tells me there will always be somebody who thinks that it is, on the contrary, very relevant. Here it is: SpRecord.


  • Reloading NAT configuration on a running VMWare Server 2.0.2

    - by Jonathan Clarke
    I have a server running VMware Server 2.0.2. The host is Debian Lenny. I have 15-20 virtual machines running, all attached to a single NAT network (named vmnet8). I have configured VMware's NAT (the vmnet-natd daemon) to forward some incoming ports to one of the VMs, since it hosts some publicly accessible services. I did this via the file /etc/vmware/vmnet8/nat/nat.conf by adding lines like the following: 80 = 192.168.100.100:80 This works great; I can reach the web server on the VM at 192.168.100.100 by connecting to the host's IP address. Sometimes I need to add port redirections to this NAT configuration, so I add a line to the configuration file. Now for the question: how do I make the natd process take this new configuration into account? Clearly, restarting the host machine does take it into account, and the newly added port is forwarded. However, this is not an option on this server, so how should one do this without restarting the whole host? Thanks for any ideas!
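
    One thing worth trying, sketched here but not verified against 2.0.2: the NAT daemon may re-read nat.conf when sent a SIGHUP, which would avoid touching the host or the guests at all:

        # locate the NAT daemon serving the host's NAT networks
        pgrep -fl vmnet-natd
        # ask it to re-read its configuration
        kill -HUP $(pgrep -f vmnet-natd)
        # heavier fallback if SIGHUP is ignored (briefly interrupts guest networking):
        /etc/init.d/vmware restart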


  • Windows 2008 R2 AWS CloudFormation Elastic beanstalk configuration

    - by Webmonger
    I'm looking for some configuration advice. I need a load-balanced Windows environment with shared media across all instances that host the app. The best explanation I can give is that there will be multiple Windows 2008 servers with IIS hosting the app, going through an ELB to load balance. Users must be able to upload content (images, video, etc.) to the site. When a user uploads media it needs to be kept in a shared location so all Windows IIS instances can access the files; I can't host the files on S3 because of the app architecture, so they need to be in a place where all IIS servers have access. In addition, I need to run an update on each IIS server instance that refreshes a local memory cache when SQL data is updated. I was thinking of a configuration like this: [ELB] - [Win 2008 IIS (multiple servers)] - [Win 2008 File & SQL Server (possibly RDS?)] Does this configuration make sense? If not, could you provide an idea of how I should configure it? Thanks in advance.


  • Apache server configuration name resolution (virtual host naming + security)

    - by Homunculus Reticulli
    I have just set up a minimal (hopefully secure? - comments welcome) Apache website using the following configuration file:

        <VirtualHost *:80>
            ServerName foobar.com
            ServerAlias www.foobar.com
            ServerAdmin [email protected]

            DocumentRoot /path/to/websites/foobar/web
            DirectoryIndex index.php

            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.errors.log"

            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>

            <Directory /path/to/websites/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I am able to access the website using www.foobar.com, but when I type foobar.com I get the error 'Server not found' - why is this? My second question concerns the security implications of the second <Directory> block above. What exactly is it doing, and is it necessary? From my (admittedly limited) understanding of Apache configuration files, this means that anyone will be able to access (write to?) the /path/to/websites/ folder. Is my understanding correct? And if yes, how is this not a security risk?
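
    On the first question, 'Server not found' is a name-resolution failure in the browser rather than anything in this vhost: the bare domain needs its own DNS record alongside www. A sketch of what the zone would contain (the address is a placeholder):

        ; both names must resolve to the web server before ServerName/ServerAlias matter
        foobar.com.       IN  A      203.0.113.10
        www.foobar.com.   IN  CNAME  foobar.com.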


  • gwt maven war plugin configuration problem

    - by Din
    I am developing a gwt application in maven. In this I am using maven war plugin. Everything works fine. When I give mvn install command it builds abc.war file in target folder. But it is not copying compiled javascript files ("module1" and "module2" directories present in target) to war directory. I want to get newly compiled javascript files in war directory. How to achieve this? pom.xml file <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>example</groupId> <artifactId>example</artifactId> <packaging>war</packaging> <version>12</version> <name>gwt-maven-archetype-project</name> <properties> <!-- convenience to define GWT version in one place --> <gwt.version>2.1.0</gwt.version> <noServer>false</noServer> <skipTest>true</skipTest> <gwt.localWorkers>1</gwt.localWorkers> <JAVA_HOME>C:\Program Files\Java\jdk1.6.0_22</JAVA_HOME> <!-- convenience to define Spring version in one place --> </properties> <dependencies> <!-- Required dependencies--> </dependencies> <build> <finalName>abc</finalName> <outputDirectory>war/WEB-INF/classes</outputDirectory> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <verbose>true</verbose> <executable>${JAVA_HOME}\bin\java.exe</executable> <compilerVersion>1.6</compilerVersion> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>gwt-maven-plugin</artifactId> <version>2.1.0</version> <executions> <execution> <goals> <goal>compile</goal> <goal>generateAsync</goal> <goal>mergewebxml</goal> <goal>test</goal> </goals> </execution> </executions> <configuration> <servicePattern>**/client/**/*Service.java</servicePattern> <noServer>${noServer}</noServer> <noserver>${noServer}</noserver> <modules> <module>com.abc.example.Module1</module> <module>com.abc.example.Module2</module> </modules> <runTarget>com.abc.example.Module1/module1.jsp</runTarget> <port>8080</port> <extraJvmArgs>-Xmx1024m -Xms1024m -Xss1024k -Dgwt.jjs.permutationWorkerFactory=com.google.gwt.dev.ThreadedPermutationWorkerFactory</extraJvmArgs> <hostedWebapp>war</hostedWebapp> <warSourceDirectory>${basedir}/war</warSourceDirectory> <webXml>${basedir}/war/WEB-INF/web.xml</webXml> </configuration> </plugin> <plugin> <artifactId>maven-antrun-plugin</artifactId> <executions> <execution> <phase>process-classes</phase> <configuration> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.1-beta-1</version> <configuration> <warSourceDirectory>${basedir}/war</warSourceDirectory> <webXml>${basedir}/war/WEB-INF/web.xml</webXml> <!--<webXml>src/main/webapp/WEB-INF/web.xml</webXml>--> <containerConfigXML>war/WEB-INF/classes/context/context.xml</containerConfigXML> <warSourceExcludes>.gwt-tmp/**</warSourceExcludes> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <executions> <execution> <goals> <goal>clean</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.4.2</version> <configuration> <argLine>-Xmx1024m</argLine> 
<skipTests>${skipTest}</skipTests> </configuration> </plugin> <plugin> <artifactId>maven-clean-plugin</artifactId> <version>2.2</version> <configuration> <filesets> <fileset> <directory>war/module1</directory> </fileset> <fileset> <directory>war/module2</directory> </fileset> <fileset> <directory>war/WEB-INF/lib</directory> </fileset> </filesets> </configuration> </plugin> </plugins> <resources> <resource> <directory>src/main/resources</directory> <excludes> <exclude>**/public/resources/**</exclude> <exclude>**/public/images/**</exclude> </excludes> <filtering>true</filtering> </resource> </resources> <filters> <filter>src/main/resources/build/build-${env}.properties</filter> </filters> </build> <profiles> <profile> <activation> <activeByDefault>true</activeByDefault> </activation> <id>dev</id> <properties> <env>dev</env> </properties> </profile> </profiles> <reporting> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> </plugin> </plugins> </reporting>
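
    One thing worth checking, as a sketch only (the parameter is taken from the gwt-maven-plugin documentation of that era and has not been verified against this exact POM): the plugin's webappDirectory setting controls where the compiled module1/module2 output is written, so pointing it at the war/ source directory that the maven-war-plugin packages should carry the freshly compiled JavaScript into abc.war:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>gwt-maven-plugin</artifactId>
          <configuration>
            <!-- hypothetical addition: write compiled modules into the packaged war source -->
            <webappDirectory>${basedir}/war</webappDirectory>
          </configuration>
        </plugin>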


  • Joomla SMTP Configuration Issue

    - by msargenttrue
    I'm having an issue with the SMTP setup of my Joomla website when trying to send mass emails through the CB Mailing (Mass Email) extension. I receive this error: SMTP Error! The following recipients failed: Number of users to whom e-mail was sent: 0 (Total in list: 1). The old version of this website's mass emailer worked fine; however, in order to add Kunena Forum and maintain compatibility I had to make several upgrades to the site. Both the new and old configurations are outlined below.

    Server for website: Mac OS X Server 10.4.11, Apache 1.3.4.1, PHP 5.2.3, MySQL 4.1.22
    Server for SMTP: Eudora Internet Mail Server 3.3.9 (EIMS Server X)

    New configuration: Joomla 1.5.25, Community Builder 1.7.1, CB Paid Subscriptions (CB Subs) 1.2.2, CB Mailing 2.3.4, Kunena Forum 1.7.0, Legacy 1.0 plug-in disabled
    Mail settings (new config): Mailer: SMTP Server | Mail from: [email protected] | From Name: CASPA | Sendmail Path: /usr/sbin/sendmail | SMTP Authentication: Yes | SMTP Security: None | SMTP Port: 25 | SMTP Username: [email protected] | SMTP Password: xxxxxxx | SMTP Host: 209.48.40.194

    Old configuration (working SMTP configuration): Joomla 1.5.9, Community Builder 1.2, CB Paid Subscriptions (CB Subs) 1.0.3, CB Mailing 2.1, Legacy 1.0 plug-in enabled
    Mail settings (old config): Mailer: SMTP Server | Mail from: [email protected] | From Name: CASPA | Sendmail Path: /usr/sbin/sendmail | SMTP Authentication: Yes | SMTP Username: [email protected] | SMTP Password: xxxxxxx | SMTP Host: 209.48.40.194

    (Notice how the older version of Joomla is missing the two fields SMTP Security and SMTP Port.) Thanks in advance!


  • hdfs configuration

    - by Ananymous
    I am a newbie trying to set up an HDFS system to serve my data (I don't plan to use MapReduce) at my lab. So far I have read the cluster setup guide, but I am still confused. Several questions: Do I need to have a secondary namenode? There are 2 files, masters and slaves. Do I really need these 2 files even though I just want HDFS? If I need them, what should go in there? I assume my namenode goes in masters and the datanodes in slaves? Do I need slave nodes? What configuration files are needed for the namenode, secondary namenode, datanode and client? (I assume core-site.xml is needed for all 4?) In addition, can someone suggest a good configuration model? Sample configurations for the namenode, secondary namenode, datanode and client would be very helpful. I am getting confused because it seems most of the documentation assumes I want to use MapReduce, which isn't the case.
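
    As a rough starting point, here is a minimal sketch for an HDFS-only cluster of that generation (hostnames and paths are placeholders, and dfs.replication should match the number of datanodes available): core-site.xml goes on every node and on clients, hdfs-site.xml on the namenode and datanodes:

        <!-- core-site.xml (all nodes and clients) -->
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://namenode.example.com:9000</value>
          </property>
        </configuration>

        <!-- hdfs-site.xml (namenode and datanodes) -->
        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/srv/hadoop/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/srv/hadoop/data</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
        </configuration>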


  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements: 1. High availability 2. Load balancing

    First configuration:
    1. Two Linux servers have been configured with one static IP each: 10.17.243.11, 10.17.243.12.
    2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as VIP, 10.17.243.11 as master and 10.17.243.12 as backup).
    3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running. As soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
    4. The problem here is that all communication goes to the master server.

    Second configuration:
    1. I found an active-active configuration for Keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1, real 10.17.243.12 and virtual 10.17.243.20 for server #2).
    2. Everything works fine: we have two VIPs which are accessible (HA), but all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they are not acceptable in our case.

    Question: is there any way to have one virtual IP which is assigned to both servers? By that I mean both servers handling some part of the workload (like what we do in web server load balancing), using either Keepalived or some other tool? Thanks in advance.
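
    For reference, the active-active arrangement described in the second configuration looks roughly like this on server #1, with the MASTER/BACKUP roles mirrored on server #2 (interface name and router IDs are placeholders):

        vrrp_instance VI_1 {          # normally owns 10.17.243.10
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            virtual_ipaddress {
                10.17.243.10
            }
        }

        vrrp_instance VI_2 {          # takes over 10.17.243.20 only if server #2 fails
            state BACKUP
            interface eth0
            virtual_router_id 52
            priority 100
            virtual_ipaddress {
                10.17.243.20
            }
        }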


  • WebConfigurationManager error after adding siteMap

    - by aron
    Hello, I'm getting this error: Compiler Error Message: CS0118: 'Configuration' is a 'namespace' but is used like a 'type' Configuration myWebConfig = WebConfigurationManager.OpenWebConfiguration("~/"); This code has been in place for 5+ months without this issue; only today, after adding this sitemap code, do I have the problem. <siteMap defaultProvider="ExtendedSiteMapProvider" enabled="true"> <providers> <clear/> <add name="ExtendedSiteMapProvider" type="Configuration.ExtendedSiteMapProvider" siteMapFile="Web.sitemap" securityTrimmingEnabled="true"/> </providers> </siteMap> I tried adding "System.Web." before "Configuration", but that did not work either: System.Web.Configuration myWebConfig = WebConfigurationManager.OpenWebConfiguration("~/"); Error 1 'System.Web.Configuration' is a 'namespace' but is used like a 'type'
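
    The collision comes from the new provider type living in a namespace that is itself called Configuration. One way out, sketched here, is to fully qualify the type the compiler should use, since OpenWebConfiguration returns a System.Configuration.Configuration:

        // name the type explicitly instead of the now-ambiguous "Configuration"
        System.Configuration.Configuration myWebConfig =
            System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration("~/");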


  • Can't run command with sudo, even with the full path, I got an error

    - by Keating Wang
    The command starling is /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling. When I run starling, I get the error Permission denied. When I run rvmsudo starling, it works well. When I run sudo starling, I get the error sudo: starling: command not found. When I run sudo /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling, I get the error: /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find starling (>= 0) amongst [minitest-1.6.0, rake-0.8.7, rdoc-2.5.8] (Gem::LoadError) from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec' from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems.rb:1229:in `gem' from /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling:18:in' I really want to run the command with sudo, because the error above is the same one I get when running rvmsudo service starling start (I had set starling up as a service of the OS).
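
    For context, plain sudo resets PATH and drops the GEM_HOME/GEM_PATH that rvm exports, which is why the gem and its dependencies cannot be found. A hedged workaround sketch (paths taken from the question) is to carry that environment across the sudo boundary explicitly:

        sudo env GEM_HOME=$GEM_HOME GEM_PATH=$GEM_PATH PATH=$PATH \
            /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling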


  • Linux - How to run Firefox with AT command

    - by conualfy
    I am trying to run a specified command at a desired time. I found at for this, and it seems to work fine if I run: echo "ls -al / > /home/florin/test.txt" | at 4:21am But I want to run a different thing: /usr/bin/firefox -new-tab http://google.ro I tried adapting the first line to my action (running it in a terminal opens a new Firefox tab with http://google.ro, so the command is correct), but with at it does not work: echo "firefox -new-tab http://google.ro" | at 4:23am The task seems to be scheduled, but it does not run. When running the previous line I get the default reply from at: warning: commands will be executed using /bin/sh Should my Firefox command be run differently in sh? Is there a way to do my action with at, or some other way? Thanks a lot!
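
    Worth noting as a sketch (display number assumed to be :0): jobs started by at run outside the graphical session, so DISPLAY - and, depending on the setup, XAUTHORITY - has to be supplied explicitly for a GUI program such as Firefox to appear:

        echo 'DISPLAY=:0 /usr/bin/firefox -new-tab http://google.ro' | at 4:23am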


  • Self-Configuring Classes W/ Command Line Args: Pattern or Anti-Pattern?

    - by dsimcha
    I've got a program where a lot of classes have really complicated configuration requirements. I've adopted the pattern of decentralizing the configuration and allowing each class to take and parse the command line/configuration file arguments in its c'tor and do whatever it needs with them. (These are very coarse-grained classes that are only instantiated a few times, so there is absolutely no performance issue here.) This avoids having to do shotgun surgery to plumb new options I add through all the levels they need to be passed through. It also avoids having to specify each configuration option in multiple places (where it's parsed and where it's used). What are some advantages/disadvantages of this style of programming? It seems to reduce separation of concerns in that every class is now doing configuration stuff, and to make programs less self-documenting because what parameters a class takes becomes less explicit. OTOH, it seems to increase encapsulation in that it makes each class more self-contained because no other part of the program needs to know exactly what configuration parameters a class might need.
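
    As an illustration of the pattern being described (the class and option names below are invented for the sketch): each coarse-grained class picks out only the options it owns from the shared argument list in its own constructor and ignores the rest, so adding an option touches a single class:

        import argparse
        import sys

        class Simulator:
            def __init__(self, argv=None):
                # parse only the options this class owns; leave everything else alone
                parser = argparse.ArgumentParser(add_help=False)
                parser.add_argument("--iterations", type=int, default=1000)
                parser.add_argument("--seed", type=int, default=0)
                opts, _rest = parser.parse_known_args(
                    argv if argv is not None else sys.argv[1:])
                self.iterations = opts.iterations
                self.seed = opts.seed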


  • task scheduler - run interactively as any user with admin credentials

    - by Force Flow
    I'm trying to deploy a scheduled task with a GPO. The task is set to run at login and executes a batch file, which then executes an EXE file. However, I also need it to be interactive and run with admin privileges to bypass the UAC prompt for a username and password when the EXE file runs. I created the task for "Vista and later". I've tried running the task as mydomain\administrator and as NT AUTHORITY\Authenticated Users, with "run only when user is logged in" and "run with highest privileges" selected. If I log in as anyone but administrator, the task does run in the background, as I can see the cmd.exe process running in Task Manager as mydomain\administrator. Only if I log in as administrator do I then see the cmd window with the batch script running. How can I get the cmd window to display no matter which user logs in?


  • What should the memory configuration be?

    - by AngryHacker
    We have a server (a ProLiant DL585 G1 by HP) which hosts Windows 2003 x64 R2 with SQL Server 2005 x64 and a host of other apps. It currently has 6GB of RAM. We are currently very memory constrained and it's clear that we need to get more memory. 8GB will probably do the trick; however, we are not sure what memory configuration will give us the biggest bang for the buck. Currently all 8 memory slots are filled (4 slots have a 1GB stick, while the other 4 slots have 512MB sticks). Should we throw the 512MB sticks away and just replace them all with 1GB sticks? If we decided to go with a higher memory configuration (e.g. 10GB or 12GB or 16GB), is it advisable to keep all the sticks the same size, or does it not matter? I was once told that interleaved memory requires (for better performance) that sticks be installed in multiples (e.g. 2 or 4 or 8 or 16, etc.). I am not even sure that the server has an interleaved configuration (and don't know how to find out), but is this true? Thanks.


  • Make Outlook run rules on non-inbox folders automatically

    - by tHeSiD
    I currently have my Outlook set up with Gmail. I have a couple of rules that I have defined which run on different folders (labels) in my account. I already have filters set up in Gmail which make emails skip the inbox and go straight into the respective folders. Whenever I get a new email in those folders, my rules are not run (they are just for setting categories); I have to run them manually. I think it's because the emails don't come to the inbox first but go directly into the folder. Is there any way to make Outlook run rules automatically on those folders? A scheduled run would also be fine.
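
    One direction that is often suggested for this, sketched below with invented folder and category names (VBA in ThisOutlookSession; it watches the label folder directly instead of relying on rules, and whether GetDefaultFolder points at the Gmail store depends on how the account is set up):

        Private WithEvents LabelItems As Outlook.Items

        Private Sub Application_Startup()
            ' watch a Gmail label folder directly, since rules only fire on the Inbox
            Set LabelItems = Session.GetDefaultFolder(olFolderInbox) _
                                    .Parent.Folders("MyLabel").Items
        End Sub

        Private Sub LabelItems_ItemAdd(ByVal Item As Object)
            Item.Categories = "MyCategory"
            Item.Save
        End Sub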


  • File permissions to run mysqld in chroot

    - by Neo
    I'm trying to run mysqld inside a chroot environment. Here's the situation: when I run mysqld as root, I can connect to my databases. But when I start mysqld using the init.d scripts, mysql gives me an error. $ mysql --user=root --password=password ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) So I guess I need to change the file permissions of some files - but which ones? Oh, and in case you are wondering, '/var/run/mysqld/mysqld.sock' is owned by the 'mysql' user. EDIT: strace output looks something like this [pid 20599] <... select resumed> ) = 0 (Timeout) [pid 20599] time (NULL) = 12982215237 [pid 20599] select(0, NULL, NULL, NULL, {1, 0} <unfinished ...>
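
    Two things worth checking, as a sketch (the chroot path is a placeholder): the client-side error only says there is no socket at /var/run/mysqld/mysqld.sock, and a chrooted daemon creates its socket under the chroot path, so either make sure that directory exists and is writable by mysql, or bypass the socket and connect over TCP:

        # let the chrooted daemon create its socket where it expects to
        mkdir -p /path/to/chroot/var/run/mysqld
        chown mysql:mysql /path/to/chroot/var/run/mysqld

        # or skip the socket entirely and talk to the server over TCP
        mysql --user=root --password=password --host=127.0.0.1 --protocol=TCP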


  • In need of a Smarter Environmental Package Configuration

    - by Jeremy Liberman
    I am trying to set up a package template in SSIS, following the Wrox Programmer to Programmer book SQL Server 2008 Integration Services: Problem - Design - Solution. I'm really liking this book even though it is for 2008 and we're using SQL Server 2005. I've got a working package template that uses an Indirect XML package configuration to identify which environment (local developer, dev, QA, production, etc.) the package is being run in. That locates the SQL Server package configuration for the environment. That set-up is great, except for the environment variable at the very front of it all. My team would prefer it if the package could use the same environment resource locator as all our other applications and tools, so we don't have two environment markers with essentially the same information in them. Normally we look up a registry key under HKEY_LOCAL_MACHINE, but the Registry package configuration type only lets you look under HKEY_CURRENT_USER. My first thought was to write a new package configuration type class that extends the Registry type; after all, we'd had such luck writing our own custom log provider, and SSIS is super extensible, right? As it turns out, there doesn't seem to be a way to write your own package configuration types. So: is there still some way I can configure my SSIS SQL Server package configuration from an HKLM registry key connection string? If this is not possible, what other workarounds are available? My idea is to write a PowerShell script that will create/modify the environment variable the package uses by fetching the connection string from the registry. This way there are still two markers, but at least the second one is automatically maintained. Is this kind of workaround necessary? Thank you for your time.
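
    A sketch of the PowerShell workaround described above (the registry path, value name and environment variable name are all hypothetical): read the connection string the other tools already use from HKLM and mirror it into the machine-level environment variable that the indirect configuration points at, so it never has to be maintained by hand:

        $props = Get-ItemProperty 'HKLM:\SOFTWARE\MyCompany\Environment'
        [Environment]::SetEnvironmentVariable('SSIS_CONFIG_DB', $props.ConfigConnectionString, 'Machine')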


  • Fluent NHibernate ExportSchema without connection string

    - by Vince
    Hi all, I want to offer the user a way to generate the database table creation script. To do this, for now, I use NHibernate's SchemaExport based on an NHibernate configuration generated with Fluent NHibernate this way (during my ISessionFactory creation method): FluentConfiguration configuration = Fluently.Configure(); ... Mapping conf ... configuration.Database(fluentDatabaseProvider); this.nhibernateConfiguration = configuration.BuildConfiguration(); returnSF = configuration.BuildSessionFactory(); ... Later new SchemaExport(this.nhibernateConfiguration) .SetOutputFile(filePath) .Execute(false, false, false); fluentDatabaseProvider is a Fluent NHibernate IPersistenceConfigurer, which is needed to get the proper SQL dialect for database creation. When the factory is created against an existing database, everything works fine. But what I want to do is create an NHibernate Configuration object for a selected database engine without a real database behind the scenes, and I can't manage to do this. Does anybody have an idea?
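
    A sketch of the direction that usually works for pure script generation (entity and variable names are placeholders, and this has not been run against this exact setup): SchemaExport only needs the dialect from the configuration, so building the configuration with a dialect-only database provider and never calling BuildSessionFactory avoids touching a live database:

        // dialect only, no connection string, no session factory
        var config = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005)
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<MyEntity>())
            .BuildConfiguration();

        new SchemaExport(config)
            .SetOutputFile(filePath)
            .Execute(false, false, false);   // write the script, do not execute it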


  • Oracle Enterprise Manager content at Collaborate 12 - the only user-driven and user-run Oracle conference

    - by Anand Akela
    From April 22-26, 2012, Oracle takes Las Vegas. Thousands of Oracle professionals will descend upon the Mandalay Bay Convention Center for a week's worth of education sessions, networking opportunities and more, at the only user-driven and user-run Oracle conference - COLLABORATE 12. This is one of the best opportunities for you to learn more about Oracle technology, including Oracle Enterprise Manager. Here is a summary of an impressive line-up of Oracle Enterprise Manager-related content at COLLABORATE 12.

    Customer Presentations:
    - Stability in Real World with SQL Plan Management
    - Upgrading to Oracle Enterprise Manager 12c - Best Practices
    - Making OEM Sing and Dance with EMCLI
    - Oracle Real Application Testing: A look under the hood
    - Optimizing Oracle E-Business Suite on Exadata
    - Experiences with OracleVM 3 and Grid Control in an Oracle BIEE environment
    - Right Cloud - How to Avoid the False Cloud by using Oracle Technologies
    - Forgetting something? Standarize your database monitoring environment with Enterprise Manager 11g
    - Implementing E-Business Suite R12 in a Federal Cloud - Lessons Learned
    - Cloud Computing Boot Camp: New DBA Features in Oracle Enterprise Manager Cloud Control 12c
    - Oracle Enterprise Manager 12c, What's Changed, What's New?
    - Monitoring a WebCenter Content Deployment with Enterprise Manager
    - Enterprise Manager 12c Cloud Control: New Features and Best Practices (for IOUG registrants only)

    Oracle Presentations:
    - Roadmap Session: Total Cloud Control with Oracle Enterprise Manager 12c
    - Real World Performance (complimentary for IOUG registrants only)
    - Database-as-a-Service: Enterprise Cloud in Three Simple Steps
    - Bullet-proof Your Enterprise, SOA & Cloud Investments Using Oracle Enterprise Gateway
    - What's New for Oracle WebLogic Management: Capabilities that Scripting Cannot Provide
    - Exadata Boot Camp: Complete Oracle Exadata Management with Oracle Enterprise Manager


  • Should a developer create test cases and then run through test cases

    - by Eben Roux
    I work for a company where the development manager expects a developer to create test cases before writing any code. These test cases then have to be maintained by the developers, and every so often a developer will be expected to run through the test cases. From this you should be able to gather that the company in question is rather small and there are no testers. Coming from a Software Architect position, having to write and execute test cases wearing my 'tester' hat is somewhat of a shock to the system. I do it anyway, but it does seem to be a rather expensive exercise :) EDIT: I seem to need to elaborate here: I am not talking about unit testing, TDD, etc. :) I am talking about that bit of testing a tester does. Once I have developed a system (with my unit tests / TDD / etc.) the software goes through a testing phase. Should a developer be that tester and develop those test cases? I think the misunderstanding may stem from the fact that developers typically are not involved with this type of testing, and therefore people assumed I was referring to the testing we do do: unit testing. But alas, no. I hope that clears it up.


  • How to Run Pam Face Authentication

    - by Supriyo Banerjee
    I am using Ubuntu 11.10. I went to the following URL to download the software 'Pam Face Authentication': http://ppa.launchpad.net/antonio.chiurazzi/ppa/ubuntu/pool/main/p/pam-face-authentication/ and downloaded the version for Natty Narwhal. I installed the software using the following commands:

        sudo apt-get install build-essential cmake qt4-qmake libx11-dev libcv-dev libcvaux-dev libhighgui2.1 libhighgui-dev libqt4-dev libpam0g-dev checkinstall
        cd /tmp && wget http://pam-face-authentication.googlecode.com/files/pam-face-authentication-0.3.tar.gz
        sudo add-apt-repository ppa:antonio.chiurazzi
        sudo apt-get update
        sudo apt-get install pam-face-authentication
        cat << EOF | sudo tee /usr/share/pam-configs/face_authentication > /dev/null
        Name: face_authentication profile
        Default: yes
        Priority: 900
        Auth-Type: Primary
        Auth: [success=end default=ignore] pam_face_authentication.so enableX
        EOF
        sudo pam-auth-update --package face_authentication

    The software installed and I can run the qt-facetrainer. But the problem is that when I restarted my system, the default login screen appears, where I have to enter my password to log in. The webcam does not start at all and I cannot log in with my face, which I think means that the pam-face-authentication program is not starting at all. Please let me know how I can log in with my face using pam-face-authentication.


  • Unable to configure a service to run at startup with update-rc.d

    - by ujjain
    I would like to have transmission-daemon and vnstat automatically run at startup. I was able to configure this for apache2 and proftpd with exactly the same commands.

        795  sudo update-rc.d transmission-daemon remove
        797  sudo update-rc.d -f transmission-daemon remove
        798  sudo update-rc.d transmission-daemon defaults
        799  sudo update-rc.d vnstat remove
        800  sudo update-rc.d -f vnstat remove
        801  sudo update-rc.d -f vnstat defaults
        802  sudo update-rc.d -f vnstat enable
        805  reboot
        807  history

        root@htpc:/home/administrator# sudo update-rc.d -f transmission-daemon remove
        Removing any system startup links for /etc/init.d/transmission-daemon ...
          /etc/rc0.d/K20transmission-daemon
          /etc/rc1.d/K20transmission-daemon
          /etc/rc2.d/S20transmission-daemon
          /etc/rc3.d/S20transmission-daemon
          /etc/rc4.d/S20transmission-daemon
          /etc/rc5.d/S20transmission-daemon
          /etc/rc6.d/K20transmission-daemon
        root@htpc:/home/administrator# sudo update-rc.d -f transmission-daemon defaults
        Adding system startup for /etc/init.d/transmission-daemon ...
          /etc/rc0.d/K20transmission-daemon -> ../init.d/transmission-daemon
          /etc/rc1.d/K20transmission-daemon -> ../init.d/transmission-daemon
          /etc/rc6.d/K20transmission-daemon -> ../init.d/transmission-daemon
          /etc/rc2.d/S20transmission-daemon -> ../init.d/transmission-daemon
          /etc/rc3.d/S20transmission-daemon -> ../init.d/transmission-daemon
          /etc/rc4.d/S20transmission-daemon -> ../init.d/transmission-daemon
          /etc/rc5.d/S20transmission-daemon -> ../init.d/transmission-daemon
        root@htpc:/home/administrator# service vnstat status
         * vnStat daemon is not running
        root@htpc:/home/administrator# service transmission-daemon status
         * transmission-daemon is not running
        root@htpc:/home/administrator# service transmission-daemon start
         * Starting bittorrent daemon transmission-daemon   [ OK ]
        root@htpc:/home/administrator# service vnstat start
         * Starting vnStat daemon vnstatd   [ OK ]
        root@htpc:/home/administrator# service apache2 status
        Apache2 is running (pid 1137).
        root@htpc:/home/administrator#


  • Jump and run HTML5 Game Framework

    - by user1818924
    We're developing a jump and run game with HTML5 and JavaScript and have to build our own game framework for it. We have some difficulties and would like to ask you for advice. We have a "Stage" object, which represents the root of our game and is a global div wrapper. The Stage can contain multiple "Scenes", which are also div elements; we would implement one Scene for playing, one for pause, and so on, and switch between them. Each Scene can contain multiple "Layers", each representing a canvas. The Layers contain "ObjectEntities", which represent images or other shapes such as rectangles. Each ObjectEntity has its own temporary canvas, so that one entity can draw an image while another draws a rectangle. We set an activeScene in our Stage, so when the game is played only the active scene is drawn. Calling activeScene.draw() tells all sublayers to draw, and they draw their entities (calling drawImage(entity.canvas)). But is this good practice - having multiple canvases to draw? Every game loop, each layer context is cleared and drawn again. For example, we have a static background layer; wouldn't it be more useful to draw it once and not clear and redraw it every time? Or should we use one global canvas, for example in the Stage, and just draw to that? But we thought this would be too expensive... Another question: do you have any advice on how to dive into implementing our own framework? Most of what we find online relies on existing frameworks, or they just implement a game without building a framework.
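
    A small sketch of the "draw static layers once" idea raised above (property names are invented, not from an existing framework): each layer keeps its own canvas, but only layers marked as dynamic or dirty are cleared and redrawn each frame, so a still background is painted once and then left alone:

        // called once per game loop for the active scene
        function drawScene(scene) {
          for (const layer of scene.layers) {
            if (!layer.dynamic && !layer.dirty) continue; // static background: keep last frame
            const ctx = layer.canvas.getContext('2d');
            ctx.clearRect(0, 0, layer.canvas.width, layer.canvas.height);
            for (const entity of layer.entities) {
              ctx.drawImage(entity.canvas, entity.x, entity.y);
            }
            layer.dirty = false;
          }
        }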

