Search Results

Search found 15209 results on 609 pages for 'configuration'.

Page 23/609 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • getting Internet connection sharing working in a slightly more complicated configuration

    - by tirichitirca t
    I have the following configuration:

    Computer A - Mac OS X 10.8.4, wireless and wired adapters.
    Computer B - Windows 7 (64-bit), wireless and wired adapters; has an internet connection via its wired (Ethernet) adapter.
    A D-Link wired/wireless router.

    Problem to solve: connect from computer A to the internet through the wired connection of computer B.

    I tried the following. I set up a local network between A and B using the D-Link router. The configuration is this:

    D-Link router - 192.168.0.1
    A - wired connection to the D-Link router, static 192.168.0.101 (I could have used the wireless, but I preferred the wired connection).
    B - wireless connection to the D-Link router, DHCP 192.168.0.102 (but I made sure it always gets the same address).
    B - wired connection to the internet using some address that begins with 10.x.y.z.

    In this configuration A can see B. I enabled ICS on the wired adapter of B. I set the gateway of A to point to B and the DNS servers to point to the DNS servers specified for the 10.x.y.z address. It doesn't work: A gets only as far as B. It can ping the 10.x.y.z address of B, though.

    I then found this article: http://terrybritton.com/windows-internet-connection-sharing-ics-not-working-with-linux-bridging-is-the-solution-916/. Terry suggests that a bridge should be defined on B between the two connections. I tried that, but computer B is basically unusable as soon as I create the bridge - it can't connect to the internet anymore. It is as if the bridge thinks traffic to the internet should go from the wired connection to the wireless, and not the other way around.

    The other thing that puzzles me is the router itself. In general the router needs an internet address; in a normal configuration it is the router that gets the IP address, and the internet traffic goes through the router. In my case I am not interested in that.

    So, any suggestions to get this working? I wouldn't shy away from using commercial software, but I would think Windows 7 should allow me to do it. Thanks
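    For reference, a sketch of what pointing A at B looks like from the Mac side, assuming the wired service is named "Ethernet"; the DNS addresses below are placeholders for whatever B's 10.x.y.z connection actually uses:

        # On computer A (Mac OS X): static IP on the wired service, with B's LAN
        # address as the default gateway.
        networksetup -setmanual "Ethernet" 192.168.0.101 255.255.255.0 192.168.0.102

        # Point DNS at the servers B uses on its 10.x.y.z connection
        # (placeholder addresses - substitute the real ones).
        networksetup -setdnsservers "Ethernet" 10.0.0.1 10.0.0.2

        # See where packets stop: if they die at 192.168.0.102, ICS on B is not forwarding.
        traceroute -n 8.8.8.8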

    Read the article

  • Deny to administrators to change network configuration settings

    - by moronrats
    I need to give admin rights to every user, but the users should not be able to change network configuration settings. For this I have enabled the following policies in User Configuration\Administrative Templates\Network\Network Connections:

    Enable Windows 2000 network connection settings for administrators
    Prohibit access to properties of a LAN connection
    Prohibit access to properties of components of a LAN connection

    Users (that exist in Administrators) can still change the LAN properties. Are there any other solutions?

    Read the article

  • Unable to open the Performance Logs and Alerts configuration

    - by davidhayes
    Hi, I'm trying to set up some perfmon logging on our server and I get this message in the event log: "Unable to open the Performance Logs and Alerts configuration. This configuration is initialized when you use the Performance Logs and Alerts Management Console snap-in to create a Log or Alert session." Any ideas? Googling hasn't turned up anything useful so far. Thanks, Dave
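    As a cross-check, counter logs can also be created from the command line with logman, which drives the same Performance Logs and Alerts facility; a minimal sketch (the log name, counter and output path are examples only):

        rem Create a counter log sampling CPU every 15 seconds (names/paths are examples)
        logman create counter PerfTest -c "\Processor(_Total)\% Processor Time" -si 15 -o "C:\PerfLogs\PerfTest"
        logman start PerfTest
        rem ... later ...
        logman stop PerfTest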

    Read the article

  • squid cache disk configuration

    - by Gogonez
    Just wondering how far drive configuration will affect squid cache performance:

    What kind of drive configuration is fast enough for squid?
    Is it true that block-level parity striped RAID is faster than byte-level?
    Will a mirrored drive configuration slow down squid's cache writes?
    How much swap (cache) space does squid really need to store the cache (reverse mode) for 200 MB of web documents?
    What kind of benchmark should I do to analyze squid disk performance?
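    For context, the on-disk cache size and layout are set with cache_dir in squid.conf; a small sketch of the directives involved (the sizes and path are illustrative, not a recommendation):

        # cache_dir <storage type> <directory> <size in MB> <L1 dirs> <L2 dirs>
        # (aufs instead of ufs where async I/O support is compiled in)
        cache_dir ufs /var/spool/squid 512 16 256

        # Objects larger than this are not cached on disk (example value)
        maximum_object_size 8 MB

        # Start evicting at 90% full, aggressively at 95% (squid defaults)
        cache_swap_low 90
        cache_swap_high 95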

    Read the article

  • ZeroDowntime deployment of configuration in Tomcat 7

    - by pagid
    Looking at the things which can be done with parallel deployments in Tomcat 7, I wonder how new or changed configuration could be provided to the various versions of the application. In a nutshell, what parallel deployment offers is this: you push a new version of a war file to the webapps dir (with filenames like "App##01.war", "App##02.war"), and every user with a new session gets the newer version, while all others stay with the old version. So how could one provide different or additional configuration (properties) to the various versions? Cheers.
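    One way to look at it: anything packaged inside each versioned WAR travels with exactly that version, so a properties file on the classpath is naturally version-specific. A minimal sketch (the file name and class are invented for illustration):

        // Loads /app-config.properties from the WAR's own classpath, so
        // App##01.war and App##02.war can each carry different values.
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        public class VersionConfig {
            public static Properties load() throws IOException {
                Properties props = new Properties();
                try (InputStream in = VersionConfig.class.getResourceAsStream("/app-config.properties")) {
                    if (in != null) {
                        props.load(in);
                    }
                }
                return props;
            }
        }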

    Read the article

  • Dell "Remote Access Configuration Utility" keeps prompting

    - by Dan
    One of our servers can never reboot without pausing at the BIOS prompt asking to "F1 to continue, F2 to enter setup utility". I have gone into Setup and there is nothing there to stop it prompting for this. I have gone into the Remote Access Configuration Utility (CTRL+E) and have set up some values, hoping that once it was set up it would not keep asking, but no luck - and there is nothing obvious like "Disable Remote Access Configuration". Does anyone know what we can do to let our machine boot cleanly?

    Read the article

  • Problem in reading configuration file from Class library project

    - by Newbie
    If I create an app.config file in a console app like this:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <appSettings>
            <add key="key1" value="val1"/>
          </appSettings>
        </configuration>

    and access it from the console application like

        object sourcePath = System.Configuration.ConfigurationManager.AppSettings["key1"];

    or by

        object sourcePath = System.Configuration.ConfigurationSettings.AppSettings["key1"];

    I am able to get the value. But if I do the same thing in a class library project, I get a null value. Why? Where am I making a mistake? I have added the proper reference to System.Configuration. I am using C# 3.0.
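    Worth keeping in mind as background: ConfigurationManager reads the configuration file of the running executable (or web application), not a class library's own app.config, so the key has to live in the consuming application's config file. A small sketch of that layout (the project names are placeholders):

        <!-- MyConsoleApp.exe.config (the host application's config), not MyLibrary.dll.config -->
        <configuration>
          <appSettings>
            <add key="key1" value="val1"/>
          </appSettings>
        </configuration>

        // Inside the class library - still resolved against the host application's config at run time.
        string val = System.Configuration.ConfigurationManager.AppSettings["key1"];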

    Read the article

  • Nginx dynamic upstream configuration / routing

    - by Dan Sosedoff
    I was experimenting with dynamic upstream configuration for nginx and can't find any good solution to implement upstream configuration from a third-party source like redis or mysql. The idea behind it is to have a single-file configuration on the primary server and proxy requests to various app servers based on environment conditions. Think of dynamic deployments where you have X servers that are running Y workers on different ports. For instance, I create a new app and deploy. The app manager selects a server, rolls out a worker (Ruby/PHP/Python), and then reports the ip:port to the central database with status "up". At this point, when I go to the given url, nginx should proxy all requests to the specified ip:port upstream. The whole thing is pretty similar to what heroku does, except this proof-of-concept is not supposed to be production ready; it is mostly for internal needs. The easiest solution I found was using resolver with a ruby-based DNS server. It works, nginx gets the IP address correctly, but the only problem is that you can't define a port number for that IP. The second solution (which I haven't tried yet) is to roll something else as a proxy server, maybe written in Erlang. In this case we would need to use something else to serve static content. Any ideas how to implement this in a more flexible and stable way? P.S. Some research options: http://openresty.org/#DynamicRoutingBasedOnRedis https://github.com/nodejitsu/node-http-proxy
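    One variation on the resolver idea that keeps the port flexible: when proxy_pass is given a variable, nginx resolves it per request, and the variable can carry host:port. A rough sketch - the static map below only stands in for whatever mechanism (Redis lookup, rewritten include, the OpenResty approach above, etc.) actually feeds the variable:

        # http context
        map $host $backend {
            default          127.0.0.1:8080;
            app1.example.com 10.0.0.11:9001;
            app2.example.com 10.0.0.12:9002;
        }

        server {
            listen 80;
            resolver 127.0.0.1;           # only needed if $backend holds a hostname
            location / {
                proxy_pass http://$backend;
                proxy_set_header Host $host;
            }
        }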

    Read the article

  • MultiPath configuration on RHEL5 and Clariion CX-300

    - by Kamil Z
    I have a problem discovering my FC-connected CX-300 storage. Frankly speaking, I'm a complete novice in Fibre Channel, so a step-by-step explanation would be appreciated. My configuration consists of two IBM HS20 blades with RHEL 5.4 on board and 2x QLogic ISP2422-based 4Gb Fibre Channel HBAs on each blade. As FC switches there are two Brocades built into the BladeCenter chassis, and finally there is an EMC Clariion CX-300. The CX-300 and the Brocade switches should be configured properly, because they were working fine with the previous configuration, whose main difference was RHEL 3 instead of RHEL 5.4. Below is my output from several useful commands:

        # lspci | grep Fibre
        06:01.0 Fibre Channel: Qlogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02)
        06:01.1 Fibre Channel: Qlogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02)

        # lsmod | grep qla
        qla2xxx             1084741  0
        scsi_transport_fc     37577  1 qla2xxx
        scsi_mod             141717 10 scsi_dh,qla2xxx,sg,scsi_transport_fc,usb_storage,libata,mptspi,mptscsih,scsi_transport_spi,sd_mod

        # cat /proc/scsi/scsi
        Attached Devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: LSILOGIC Model: 1030 IM IM Rev: 1000
          Type: Direct-Access ANSI SCSI revision: 02
        Host: scsi0 Channel: 01 Id: 00 Lun: 00
          Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418
          Type: Direct-Access ANSI SCSI revision: 04
        Host: scsi0 Channel: 01 Id: 00 Lun: 00
          Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418
          Type: Direct-Access ANSI SCSI revision: 04

    I followed the instructions from this site (editing /etc/multipath.conf), but I failed: after multipath -ll the output was empty. Do you have any suggestions about discovering FC-connected LUNs in such a configuration?
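    Not an answer as such, but for orientation, a minimal sketch of the pieces a RHEL 5 dm-multipath setup needs. Note that if /proc/scsi/scsi still shows only the local disks, the HBAs are not seeing any LUNs yet (zoning / storage group masking on the switch and array), and multipath -ll will stay empty no matter what is in multipath.conf. The blacklist pattern below is a placeholder:

        # /etc/multipath.conf - minimal sketch
        defaults {
            user_friendly_names yes
        }
        blacklist {
            devnode "^sda$"        # keep the local system disk out of multipath (adjust)
        }

        # Load the module, start the daemon and rebuild the maps
        modprobe dm-multipath
        service multipathd start
        chkconfig multipathd on
        multipath -v2
        multipath -ll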

    Read the article

  • DCOM configuration: accounts with same name but different passwords problem

    - by archimed7592
    Hello, everybody! I'm experiencing trouble with DCOM configuration. Here is the case: I'm using a product which supports client-server interaction through DCOM, but the client won't get any access to the server if the attempt is made from an account whose name also exists on the server but has a different password. Basically, if we try to access the server from the Administrator account, which obviously is present on the server machine, we will fail if the client's Administrator password doesn't match the server's. After actively collaborating with the product's developer in attempts to localize the issue, he came back with a resolution of "can't be fixed" - or, to call a spade a spade, more likely "don't know how to fix" :). I believe there is a solution for this problem and I'm asking you, IT professionals, to help me out with this one. I do realize that the problem may be caused by the way the developer interacts with DCOM, and if so it can't be fixed by means of pure system configuration and the question should be asked at SO. But since I've bumped into the same behavior while working with file/printer sharing - Windows tried to simplify everything and used the currently impersonated credentials to access the share - I hope the solution lies at the system configuration layer. P.S. I believe that the actual software product I'm talking about is entirely irrelevant; however, my experience tells me that there will always be somebody who thinks it is, on the contrary, very relevant. Here it is: SpRecord.

    Read the article

  • Reloading NAT configuration on a running VMWare Server 2.0.2

    - by Jonathan Clarke
    I have a server running VMware Server 2.0.2. The host is Debian Lenny. I have 15-20 virtual machines running, all attached to a single NAT network (named vmnet8). I have configured VMware's NAT (the vmnet-natd daemon) to forward some incoming ports to one of the VMs, since it hosts some publicly accessible services. I did this via the file /etc/vmware/vmnet8/nat/nat.conf by adding lines like the following:

        80 = 192.168.100.100:80

    This works great: I can reach the web server on the VM at 192.168.100.100 by connecting to the host's IP address. Sometimes I need to add port redirections to this NAT configuration, so I add a line to the configuration file. Now for the question: how do I make the natd process take this new configuration into account? Clearly, restarting the host machine does take it into account, and the newly added port is forwarded. However, this is not an option on this server, so how should one do this without restarting the whole host? Thanks for any ideas!
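    For what it's worth, two things that are commonly tried here, both offered as assumptions to test rather than documented behaviour - a HUP to the NAT daemon, and restarting only the VMware host services (which may briefly interrupt guest networking, so check the impact before relying on it):

        # Ask the running NAT daemon to re-read its configuration
        # (assumption: vmnet-natd honours SIGHUP for this)
        pkill -HUP vmnet-natd

        # Fallback: restart the VMware host services only, not the whole machine;
        # verify beforehand how this affects the running guests' networking.
        /etc/init.d/vmware restart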

    Read the article

  • Windows 2008 R2 AWS CloudFormation Elastic beanstalk configuration

    - by Webmonger
    I'm looking for some configuration advice. I have a need for a load-balanced Windows environment with shared media across all instances that are hosting the app. The best explanation I can give is that there will be multiple Windows 2008 servers with IIS hosting the app, going through an ELB for load balancing. Users must be able to upload content (images, video etc...) to the site that will be hosted. When a user uploads media it needs to be kept in a shared location so all Windows IIS instances can access the files. I can't host the files on S3 because of the app architecture, so they need to be in a place where all IIS servers will have access. In addition, I need to run an update on each IIS server instance that refreshes a local memory cache when SQL data is updated. I was thinking of a configuration like this:

        [ELB] - [Win 2008 IIS (multiple servers)] - [Win 2008 File & SQL Server (possibly RDS?)]

    Does this configuration make sense? If not, could you provide an idea of how I should configure it? Thanks in advance

    Read the article

  • Apache server configuration name resolution (virtual host naming + security)

    - by Homunculus Reticulli
    I have just set up a minimal (hopefully secure? - comments welcome) Apache website using the following configuration file:

        <VirtualHost *:80>
            ServerName foobar.com
            ServerAlias www.foobar.com
            ServerAdmin [email protected]
            DocumentRoot /path/to/websites/foobar/web
            DirectoryIndex index.php

            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.errors.log"

            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>

            <Directory /path/to/websites/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I am able to access the website by using www.foobar.com; however, when I type foobar.com I get the error 'Server not found' - why is this?

    My second question concerns the security implications of this directive in the configuration above:

        <Directory /path/to/websites/>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    What exactly is it doing, and is it necessary? From my (admittedly limited) understanding of Apache configuration files, this means that anyone will be able to access (write to?) the /path/to/websites/ folder. Is my understanding correct? And if yes, how is this not a security risk?
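    As an aside, 'Server not found' comes from the browser before the request ever reaches Apache, so the first thing to rule out is DNS for the bare domain; a quick check (using foobar.com as the placeholder it already is):

        # If the first command returns nothing while the second resolves,
        # the missing A record for the bare domain is the problem, not the VirtualHost.
        dig +short foobar.com
        dig +short www.foobar.com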

    Read the article

  • gwt maven war plugin configuration problem

    - by Din
    I am developing a GWT application with Maven, and I am using the maven-war-plugin. Everything works fine: when I run mvn install it builds abc.war in the target folder. But it is not copying the compiled JavaScript files (the "module1" and "module2" directories present in target) to the war directory. I want to get the newly compiled JavaScript files into the war directory. How do I achieve this? Here is the pom.xml file:

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <groupId>example</groupId>
          <artifactId>example</artifactId>
          <packaging>war</packaging>
          <version>12</version>
          <name>gwt-maven-archetype-project</name>
          <properties>
            <!-- convenience to define GWT version in one place -->
            <gwt.version>2.1.0</gwt.version>
            <noServer>false</noServer>
            <skipTest>true</skipTest>
            <gwt.localWorkers>1</gwt.localWorkers>
            <JAVA_HOME>C:\Program Files\Java\jdk1.6.0_22</JAVA_HOME>
            <!-- convenience to define Spring version in one place -->
          </properties>
          <dependencies>
            <!-- Required dependencies-->
          </dependencies>
          <build>
            <finalName>abc</finalName>
            <outputDirectory>war/WEB-INF/classes</outputDirectory>
            <plugins>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                  <verbose>true</verbose>
                  <executable>${JAVA_HOME}\bin\java.exe</executable>
                  <compilerVersion>1.6</compilerVersion>
                  <source>1.6</source>
                  <target>1.6</target>
                </configuration>
              </plugin>
              <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>gwt-maven-plugin</artifactId>
                <version>2.1.0</version>
                <executions>
                  <execution>
                    <goals>
                      <goal>compile</goal>
                      <goal>generateAsync</goal>
                      <goal>mergewebxml</goal>
                      <goal>test</goal>
                    </goals>
                  </execution>
                </executions>
                <configuration>
                  <servicePattern>**/client/**/*Service.java</servicePattern>
                  <noServer>${noServer}</noServer>
                  <noserver>${noServer}</noserver>
                  <modules>
                    <module>com.abc.example.Module1</module>
                    <module>com.abc.example.Module2</module>
                  </modules>
                  <runTarget>com.abc.example.Module1/module1.jsp</runTarget>
                  <port>8080</port>
                  <extraJvmArgs>-Xmx1024m -Xms1024m -Xss1024k -Dgwt.jjs.permutationWorkerFactory=com.google.gwt.dev.ThreadedPermutationWorkerFactory</extraJvmArgs>
                  <hostedWebapp>war</hostedWebapp>
                  <warSourceDirectory>${basedir}/war</warSourceDirectory>
                  <webXml>${basedir}/war/WEB-INF/web.xml</webXml>
                </configuration>
              </plugin>
              <plugin>
                <artifactId>maven-antrun-plugin</artifactId>
                <executions>
                  <execution>
                    <phase>process-classes</phase>
                    <configuration>
                    </configuration>
                    <goals>
                      <goal>run</goal>
                    </goals>
                  </execution>
                </executions>
              </plugin>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.1-beta-1</version>
                <configuration>
                  <warSourceDirectory>${basedir}/war</warSourceDirectory>
                  <webXml>${basedir}/war/WEB-INF/web.xml</webXml>
                  <!--<webXml>src/main/webapp/WEB-INF/web.xml</webXml>-->
                  <containerConfigXML>war/WEB-INF/classes/context/context.xml</containerConfigXML>
                  <warSourceExcludes>.gwt-tmp/**</warSourceExcludes>
                </configuration>
              </plugin>
              <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>cobertura-maven-plugin</artifactId>
                <executions>
                  <execution>
                    <goals>
                      <goal>clean</goal>
                    </goals>
                  </execution>
                </executions>
              </plugin>
              <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.4.2</version>
                <configuration>
                  <argLine>-Xmx1024m</argLine>
                  <skipTests>${skipTest}</skipTests>
                </configuration>
              </plugin>
              <plugin>
                <artifactId>maven-clean-plugin</artifactId>
                <version>2.2</version>
                <configuration>
                  <filesets>
                    <fileset>
                      <directory>war/module1</directory>
                    </fileset>
                    <fileset>
                      <directory>war/module2</directory>
                    </fileset>
                    <fileset>
                      <directory>war/WEB-INF/lib</directory>
                    </fileset>
                  </filesets>
                </configuration>
              </plugin>
            </plugins>
            <resources>
              <resource>
                <directory>src/main/resources</directory>
                <excludes>
                  <exclude>**/public/resources/**</exclude>
                  <exclude>**/public/images/**</exclude>
                </excludes>
                <filtering>true</filtering>
              </resource>
            </resources>
            <filters>
              <filter>src/main/resources/build/build-${env}.properties</filter>
            </filters>
          </build>
          <profiles>
            <profile>
              <activation>
                <activeByDefault>true</activeByDefault>
              </activation>
              <id>dev</id>
              <properties>
                <env>dev</env>
              </properties>
            </profile>
          </profiles>
          <reporting>
            <plugins>
              <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>cobertura-maven-plugin</artifactId>
              </plugin>
            </plugins>
          </reporting>
        </project>
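    Not a verified fix for this build, but a common way to copy generated artifacts into the war source directory is a copy step bound to the build; a sketch using the maven-resources-plugin copy-resources goal (the phase, source directory and include patterns are assumptions based on the layout described above):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-resources-plugin</artifactId>
          <executions>
            <execution>
              <id>copy-gwt-modules</id>
              <phase>prepare-package</phase>
              <goals>
                <goal>copy-resources</goal>
              </goals>
              <configuration>
                <!-- destination: the war source directory used by the war and gwt plugins -->
                <outputDirectory>${basedir}/war</outputDirectory>
                <resources>
                  <resource>
                    <!-- assumption: the compiled modules land directly under target/ -->
                    <directory>${project.build.directory}</directory>
                    <includes>
                      <include>module1/**</include>
                      <include>module2/**</include>
                    </includes>
                  </resource>
                </resources>
              </configuration>
            </execution>
          </executions>
        </plugin>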

    Read the article

  • Joomla SMTP Configuration Issue

    - by msargenttrue
    I'm having an issue with the SMTP setup of my Joomla website when trying to send mass emails through the CB Mailing (Mass Email) extension. I receive this error:

        SMTP Error! The following recipients failed:
        Number of users to whom e-mail was sent: 0 (Total in list: 1)

    The old version of this website's mass emailer worked fine; however, in order to add Kunena Forum and maintain compatibility I had to make several upgrades to the site. Both the new version and old version configurations are outlined below.

    Server for website: Mac OS X Server 10.4.11, Apache 1.3.4.1, PHP 5.2.3, MySQL 4.1.22
    Server for SMTP: Eudora Internet Mail Server 3.3.9 (EIMS Server X)

    New configuration: Joomla 1.5.25, Community Builder 1.7.1, CB Paid Subscriptions (CB Subs) 1.2.2, CB Mailing 2.3.4, Kunena Forum 1.7.0, Legacy 1.0 plug-in disabled

    Mail settings (new config):
        Mailer: SMTP Server
        Mail from: [email protected]
        From Name: CASPA
        Sendmail Path: /usr/sbin/sendmail
        SMTP Authentication: Yes
        SMTP Security: None
        SMTP Port: 25
        SMTP Username: [email protected]
        SMTP Password: xxxxxxx
        SMTP Host: 209.48.40.194

    Old configuration (working SMTP configuration): Joomla 1.5.9, Community Builder 1.2, CB Paid Subscriptions (CB Subs) 1.0.3, CB Mailing 2.1, Legacy 1.0 plug-in enabled

    Mail settings (old config):
        Mailer: SMTP Server
        Mail from: [email protected]
        From Name: CASPA
        Sendmail Path: /usr/sbin/sendmail
        SMTP Authentication: Yes
        SMTP Username: [email protected]
        SMTP Password: xxxxxxx
        SMTP Host: 209.48.40.194

    (Notice how the older version of Joomla is missing the 2 fields: SMTP Security and SMTP Port.)

    Thanks in advance!

    Read the article

  • WebConfigurationManager error after adding siteMap

    - by aron
    Hello, I'm getting this error:

        Compiler Error Message: CS0118: 'Configuration' is a 'namespace' but is used like a 'type'

        Configuration myWebConfig = WebConfigurationManager.OpenWebConfiguration("~/");

    This code has been in place for 5+ months without this issue; only today, after adding this sitemap code, do I have the problem:

        <siteMap defaultProvider="ExtendedSiteMapProvider" enabled="true">
          <providers>
            <clear/>
            <add name="ExtendedSiteMapProvider" type="Configuration.ExtendedSiteMapProvider" siteMapFile="Web.sitemap" securityTrimmingEnabled="true"/>
          </providers>
        </siteMap>

    I tried adding "System.Web." before the "Configuration", but that did not work either:

        System.Web.Configuration myWebConfig = WebConfigurationManager.OpenWebConfiguration("~/");

        Error 1 'System.Web.Configuration' is a 'namespace' but is used like a 'type'
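    For background on the CS0118 itself: OpenWebConfiguration returns the type System.Configuration.Configuration, and once the project contains a namespace that is itself named Configuration (as in type="Configuration.ExtendedSiteMapProvider" above), the bare name Configuration resolves to that namespace instead of the type. A small sketch of fully qualifying the type - a sketch, not a confirmed fix for this site:

        using System.Web.Configuration;

        // Fully qualify (or global::-qualify) the type so the compiler does not
        // resolve "Configuration" to the project's own Configuration namespace.
        System.Configuration.Configuration myWebConfig =
            WebConfigurationManager.OpenWebConfiguration("~/");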

    Read the article

  • hdfs configuration

    - by Ananymous
    I am a newbie trying to set up an HDFS system to serve my data (I don't plan to use MapReduce) at my lab. So far I have read about cluster setup, but I am still confused. Several questions:

    Do I need to have a secondary namenode?
    There are 2 files, masters and slaves. Do I really need these 2 files even though I just want HDFS? If I need them, what should go in there? I assume my namenode goes in masters and my datanodes in slaves? Do I need slave nodes?
    What configuration files are needed for the namenode, secondary namenode, datanode and client? (I assume core-site.xml is needed for all 4?)

    In addition, can someone suggest a good configuration model? A sample configuration for the namenode, secondary namenode, datanode, and the client would be very helpful. I am getting confused because most of the documentation seems to assume I want to use MapReduce, which isn't the case.
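    As a rough illustration of how small an HDFS-only setup can be, here is a sketch of the two files usually involved; the property names match the Hadoop 0.20/1.x era, and the host names, port and paths are placeholders:

        <!-- core-site.xml (all nodes and clients) -->
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://namenode-host:9000</value>
          </property>
        </configuration>

        <!-- hdfs-site.xml (namenode and datanodes) -->
        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/data/hdfs/name</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/data/hdfs/data</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>3</value>
          </property>
        </configuration>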

    Read the article

  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements:

    1. High availability
    2. Load balancing

    First configuration:

    1. Two Linux servers have been configured with one static IP each: 10.17.243.11, 10.17.243.12.
    2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as VIP, 10.17.243.11 as master and 10.17.243.12 as backup).
    3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running. As soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
    4. The problem here is that all communication goes to the master server.

    Second configuration:

    1. I found an active-active configuration for keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1, and real 10.17.243.12 and virtual 10.17.243.20 for server #2).
    2. Everything works fine: we have two VIPs which are accessible (HA). But all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they are not acceptable in our case.

    Question: Is there any way to have one virtual IP which is assigned to both servers? By that I mean both servers handling some part of the workload (like the thing we do in web server load balancing), using either keepalived or some other tool? Thanks in advance.
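    For reference, the active-active layout described in the second configuration is normally written as two mirrored VRRP instances; a trimmed sketch of server #1's keepalived.conf (the interface name, router IDs and priorities are illustrative), with server #2 carrying the opposite MASTER/BACKUP states:

        vrrp_instance VI_1 {           # master for 10.17.243.10
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            virtual_ipaddress {
                10.17.243.10
            }
        }

        vrrp_instance VI_2 {           # backup for 10.17.243.20
            state BACKUP
            interface eth0
            virtual_router_id 52
            priority 100
            virtual_ipaddress {
                10.17.243.20
            }
        }

    Serving one VIP actively from both machines at the same time usually means pairing the VRRP part with keepalived's virtual_server (IPVS) blocks or a separate load balancer, rather than VRRP alone.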

    Read the article

  • Self-Configuring Classes W/ Command Line Args: Pattern or Anti-Pattern?

    - by dsimcha
    I've got a program where a lot of classes have really complicated configuration requirements. I've adopted the pattern of decentralizing the configuration and allowing each class to take and parse the command line/configuration file arguments in its c'tor and do whatever it needs with them. (These are very coarse-grained classes that are only instantiated a few times, so there is absolutely no performance issue here.) This avoids having to do shotgun surgery to plumb new options I add through all the levels they need to be passed through. It also avoids having to specify each configuration option in multiple places (where it's parsed and where it's used). What are some advantages/disadvantages of this style of programming? It seems to reduce separation of concerns in that every class is now doing configuration stuff, and to make programs less self-documenting because what parameters a class takes becomes less explicit. OTOH, it seems to increase encapsulation in that it makes each class more self-contained because no other part of the program needs to know exactly what configuration parameters a class might need.
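    A bare-bones sketch of the pattern being discussed (Python here purely for brevity; the classes and option names are invented): each coarse-grained class pulls only the options it cares about from the shared argument list in its constructor, so adding an option touches a single class.

        import argparse
        import sys

        class Indexer:
            """Parses only the options it cares about; ignores the rest."""
            def __init__(self, argv):
                parser = argparse.ArgumentParser(add_help=False)
                parser.add_argument("--index-dir", default="./index")
                parser.add_argument("--shards", type=int, default=4)
                opts, _unknown = parser.parse_known_args(argv)
                self.index_dir = opts.index_dir
                self.shards = opts.shards

        class Fetcher:
            def __init__(self, argv):
                parser = argparse.ArgumentParser(add_help=False)
                parser.add_argument("--timeout", type=float, default=30.0)
                opts, _unknown = parser.parse_known_args(argv)
                self.timeout = opts.timeout

        if __name__ == "__main__":
            argv = sys.argv[1:]
            # Each class configures itself; main() never enumerates every option.
            indexer = Indexer(argv)
            fetcher = Fetcher(argv)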

    Read the article

  • What should the memory configuration be?

    - by AngryHacker
    We have a server (ProLiant DL585 G1 by HP) which hosts Windows 2003 x64 R2 with SQL Server 2005 x64 and a host of other apps. It currently has 6GB of RAM. We are currently very memory constrained and it's clear that we need to get more memory. 8GB will probably do the trick; however, we are not sure what memory configuration will give us the biggest bang for the buck. Currently all 8 memory slots are filled (4 slots have 1GB sticks, while the other 4 slots have 512MB sticks). Should we throw the 512MB sticks away and just replace them all with 1GB sticks? If we decided to go with a higher memory configuration (e.g. 10GB or 12GB or 16GB), is it advisable to keep all the sticks the same size, or does it not matter? I was once told that interleaved memory requires (for better performance) that memory be installed in multiples (e.g. 2 or 4 or 8 or 16, etc...). I am not even sure that the server has an interleaved configuration (and don't know how to find out), but is this true? Thanks.

    Read the article

  • In need of a Smarter Environmental Package Configuration

    - by Jeremy Liberman
    I am trying to set up a package template in SSIS, following the Wrox Programmer to Programmer book, SQL Server 2008 Integration Services: Problem - Design - Solution. I'm really liking this book even though it is for 2008 and we're using SQL Server 2005. I've got a working package template that uses an Indirect XML package configuration to identify which environment (local developer, dev, QA, production, etc.) the package is being run in. That locates the SQL Server package configuration for the environment. That set-up is great, except for the environment variable at the very front of it all. My team would prefer it if the package could use the same environment resource locator that all our other applications and tools use, so we don't have two environment markers with essentially the same information in them. Normally we look up a registry key in HKEY_LOCAL_MACHINE, but the Registry package configuration type only lets you look up HKEY_CURRENT_USER registries. My first thought was to write a new package configuration type class that extends the Registry type; after all, we'd had such luck writing our own custom log provider. SSIS is super extendable, right? But there doesn't seem to be a way to write your own package configuration types. Is there still some way I can configure my SSIS SQL Server package configuration from an HKLM registry key connection string? If this is not possible, what other workarounds are available? My idea is to write a PowerShell script that will create/modify the environment variable that the package will use, by fetching the connection string from the registry. This way there are still two markers, but at least then it's automatically maintained and automated. Is this kind of workaround necessary? Thank you for your time.
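    A minimal sketch of the PowerShell workaround described above - read the connection string from an HKLM key and mirror it into a machine-level environment variable that the Indirect configuration can point at (the key path, value name and variable name are placeholders):

        # Placeholder key/value names - adjust to the real environment locator.
        $regPath = "HKLM:\SOFTWARE\MyCompany\Environment"
        $connStr = (Get-ItemProperty -Path $regPath -Name "SsisConfigConnection").SsisConfigConnection

        # Machine-level so the SQL Agent / SSIS service account sees it too.
        [Environment]::SetEnvironmentVariable("SSIS_CONFIG_DB", $connStr, "Machine")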

    Read the article

  • Fluent NHibernate ExportSchema without connexion string

    - by Vince
    Hi all, I want to offer users a way to generate the database table creation script. To do this, for now I use NHibernate's SchemaExport based on an NHibernate configuration generated with Fluent NHibernate, this way (during my ISessionFactory creation method):

        FluentConfiguration configuration = Fluently.Configure();
        // ... Mapping conf ...
        configuration.Database(fluentDatabaseProvider);
        this.nhibernateConfiguration = configuration.BuildConfiguration();
        returnSF = configuration.BuildSessionFactory();

    Later:

        new SchemaExport(this.nhibernateConfiguration)
            .SetOutputFile(filePath)
            .Execute(false, false, false);

    fluentDatabaseProvider is a Fluent NHibernate IPersistenceConfigurer, which is needed to get the proper SQL dialect for database creation. When the factory is created with an existing database, everything works fine. But what I want to do is create an NHibernate Configuration object for a selected database engine without a real database behind the scenes... and I don't manage to do this. If anybody has some ideas.
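    One direction worth sketching - an assumption for this exact setup rather than something verified - is to build only the Configuration (never the session factory) with a dialect-only database configuration, so no connection string is required, and let SchemaExport write the script without executing it:

        // Namespaces involved: FluentNHibernate.Cfg, FluentNHibernate.Cfg.Db, NHibernate.Tool.hbm2ddl.
        // Dialect only, no connection string - a sketch aimed at script generation only.
        FluentConfiguration cfg = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005)   // or whichever IPersistenceConfigurer the user selected
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SomeEntity>());   // SomeEntity is a placeholder

        NHibernate.Cfg.Configuration nhCfg = cfg.BuildConfiguration();

        // execute: false - the DDL goes to the file and no database connection is opened.
        new SchemaExport(nhCfg)
            .SetOutputFile(filePath)
            .Execute(false, false, false);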

    Read the article

  • Drupal & nginx : a sound "general purpose" configuration?

    - by sbrattla
    After a bit of back and forth with configuring Drupal and nginx to work together, I've come up with the configuration below for a site. It works well, both with private and public file systems. However, as I am fairly new to nginx, I'd like to hear if there is something in this configuration that I should change. Please note: I'm aiming towards a general-purpose Drupal configuration - that is, a configuration which others who are trying out Drupal + nginx can "copy paste" to get up and running.

        server {
            listen 80;
            server_name www.example.* example.*;

            access_log /home/example/www/logs/access.log;
            error_log /home/example/www/logs/error.log;

            root /home/example/www/public_html;

            # Site Icon
            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }

            # Search Engines
            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }

            # Drush
            location = /backup {
                deny all;
            }

            # Very rarely should these ever be accessed from outside
            # the local network.
            location ~* \.(txt|log)$ {
                allow 10.0.0.0/8;
                allow 172.16.0.0/12;
                allow 192.168.0.0/16;
                deny all;
            }

            location ~ \..*/.*\.php$ {
                return 403;
            }

            # Default location
            location / {
                try_files $uri @rewrite;
            }

            # Files managed by Drupal will be served via PHP.
            location ~* /system/files/ {
                access_log off;
                try_files $uri @rewrite;
            }

            ## Images and static content is treated different
            location ~* \.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
                access_log off;
                expires 30d;
            }

            # Some Drupal modules enforce no slash (/) at the end
            # of the URL.
            location @rewrite {
                rewrite_log on;
                rewrite ^/(.*)$ /index.php?q=$1;
            }

            # PHP5-FPM is used to handle PHP.
            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(.*)$;
                fastcgi_pass unix:/var/run/example.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_intercept_errors on;
                fastcgi_ignore_client_abort off;
                fastcgi_connect_timeout 60;
                fastcgi_send_timeout 180;
                fastcgi_read_timeout 180;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
            }

            ## Disable viewing .htaccess & .htpassword
            location ~ /\.ht {
                deny all;
            }
        }

    Read the article

  • Configuring CESoPSN using Cisco MWR 2941

    - by Rayne
    I'm trying to configure CESoPSN on two Cisco MWR 2941 routers, but the alarm LED lights are always lit. My configuration is modeled after this sample configuration. My setup is as follows: on the Cisco MWRs, E1 0/5 is configured to be CESoPSN, E1 0/9 is configured to be CESoPSN (CAS mode), and E1 0/7 is configured to be SAToP. The two MWRs are connected to each other via the GigabitEthernet port 0/2. The GigE ports are configured as a vlan because the ports are L2 ports and cannot be assigned an IP address directly. The two Cisco MWRs are connected to a traffic simulator, i.e. the traffic simulator will play out E1 traffic to MWR 1 and record the output traffic from MWR 2.

    On my traffic simulator, when it's connected to the E1 ports 0/5 and 0/9 (both CESoPSN configurations), the "Remote" alarm is on. However, when connected to the E1 ports 0/7 (SAToP configuration), no alarms were on. The GigE connection seems to be working fine (both LED lights on the 2 ports are green). The SAToP configuration seems to be fine too (left LED is green, right LED is off on both E1 0/7 ports). However, both CESoPSN configurations seem to be not working (left LED is green, right LED is yellow on both E1 0/5 and 0/9 ports). I don't know if there's anything wrong with my configuration for the CESoPSN, as I'm very new to this. The relevant portions of the configuration are as follows:

    MWR 1:

        controller E1 0/5
         clock source internal
         cem-group 5 timeslots 1-31
         description E1 CESoPSN example
        !
        controller E1 0/7
         clock source internal
         cem-group 7 unframed
         description E1 SATOP example
        !
        controller E1 0/9
         mode cas
         clock source internal
         cem-group 9 timeslots 1-24
         description E1 CESoPSN CAS example
        !
        interface Loopback0
         ip address 30.30.30.1 255.255.255.255
        !
        interface GigabitEthernet0/2
         switchport access vlan 100
         mpls ip
        !
        interface CEM0/5
         no ip address
         cem 5
          xconnect 30.30.30.2 305 encapsulation mpls
         !
        !
        interface CEM0/7
         no ip address
         cem 7
          xconnect 30.30.30.2 307 encapsulation mpls
         !
        !
        interface CEM0/9
         no ip address
         cem 9
          signaling inband-cas
          xconnect 30.30.30.2 309 encapsulation mpls
         !
        !
        interface Vlan100
         ip address 50.50.50.1 255.255.255.0
         no ptp enable
         mpls ip
        !
        no ip classless
        ip forward-protocol nd
        ip route 30.30.30.2 255.255.255.255 50.50.50.2
        !

    MWR 2:

        controller E1 0/5
         clock source internal
         cem-group 5 timeslots 1-31
         description E1 CESoPSN example
        !
        controller E1 0/7
         clock source internal
         cem-group 7 unframed
        !
        controller E1 0/9
         mode cas
         clock source internal
         cem-group 9 timeslots 1-24
         description E1 CESoPSN CAS example
        !
        interface Loopback0
         ip address 30.30.30.2 255.255.255.255
        !
        interface GigabitEthernet0/2
         switchport access vlan 100
         mpls ip
        !
        interface CEM0/5
         no ip address
         cem 5
          xconnect 30.30.30.1 305 encapsulation mpls
         !
        !
        interface CEM0/7
         no ip address
         cem 7
          xconnect 30.30.30.1 307 encapsulation mpls
         !
        !
        interface CEM0/9
         no ip address
         cem 9
          signaling inband-cas
          xconnect 30.30.30.1 309 encapsulation mpls
         !
        !
        interface Vlan100
         ip address 50.50.50.2 255.255.255.0
         no ptp enable
         mpls ip
        !
        no ip classless
        ip forward-protocol nd
        ip route 30.30.30.1 255.255.255.255 50.50.50.1
        !

    If anyone is familiar with CESoPSN configurations, please advise.
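    When comparing the two boxes it can help to capture the pseudowire and controller state on each side; a few commands worth looking at, assuming they are available in this MWR 2941 image:

        show mpls l2transport vc
        show cem circuit
        show controllers e1 0/5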

    Read the article

  • ConfigurationManager.AppSettings is empty?

    - by Mattousai
    Hello All, I have a VS2008 ASP.NET Web Service Application running on the local IIS of my XP machine. A separate project in the same solution uses test methods to invoke the WS calls and run their processes. When I added a web reference to the WS app, VS2008 created a Settings.settings file in the Properties folder to store the address of the web reference. This process also created a new section in the Web.config file called applicationSettings to store the values from Settings.settings. When my application attempts to retrieve configuration values from the appSettings section of the Web.config file, via ConfigurationManager.AppSettings[key], all values are null and AppSettings.AllKeys.Length is always zero. I even reverted the Web.config file to before the web reference was added, and made sure it was exactly the same as a system-generated web.config file for a new project that works fine. After comparing the reverted Web.config and a new Web.config, I added one simple value in the appSettings section, and still no luck with ConfigurationManager.AppSettings[key]. Here is the reverted Web.config that cannot be read from:

        <?xml version="1.0"?>
        <configuration>
          <configSections>
            <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
              <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
                <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
                <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
                  <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" />
                  <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                  <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                  <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                </sectionGroup>
              </sectionGroup>
            </sectionGroup>
          </configSections>
          <appSettings>
            <add key="testkey" value="testvalue"/>
          </appSettings>
          <connectionStrings/>
          <system.web>
            <!--
              Set compilation debug="true" to insert debugging symbols into the compiled page.
              Because this affects performance, set this value to true only during development.
            -->
            <compilation debug="false">
              <assemblies>
                <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
                <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
                <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
                <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
              </assemblies>
            </compilation>
            <!--
              The <authentication> section enables configuration of the security authentication mode
              used by ASP.NET to identify an incoming user.
            -->
            <authentication mode="Windows" />
            <!--
              The <customErrors> section enables configuration of what to do if/when an unhandled
              error occurs during the execution of a request. Specifically, it enables developers
              to configure html error pages to be displayed in place of a error stack trace.

              <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm">
                <error statusCode="403" redirect="NoAccess.htm" />
                <error statusCode="404" redirect="FileNotFound.htm" />
              </customErrors>
            -->
            <pages>
              <controls>
                <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
                <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              </controls>
            </pages>
            <httpHandlers>
              <remove verb="*" path="*.asmx"/>
              <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/>
            </httpHandlers>
            <httpModules>
              <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            </httpModules>
          </system.web>
          <system.codedom>
            <compilers>
              <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
                <providerOption name="CompilerVersion" value="v3.5"/>
                <providerOption name="WarnAsError" value="false"/>
              </compiler>
            </compilers>
          </system.codedom>
          <!--
            The system.webServer section is required for running ASP.NET AJAX under Internet
            Information Services 7.0. It is not necessary for previous version of IIS.
          -->
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false"/>
            <modules>
              <remove name="ScriptModule" />
              <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            </modules>
            <handlers>
              <remove name="WebServiceHandlerFactory-Integrated"/>
              <remove name="ScriptHandlerFactory" />
              <remove name="ScriptHandlerFactoryAppServices" />
              <remove name="ScriptResource" />
              <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            </handlers>
          </system.webServer>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/>
                <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
              </dependentAssembly>
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/>
                <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>

    Has anyone experienced this, or know how to solve the problem? TIA -Matt
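    One thing worth spelling out when debugging an empty-AppSettings situation like this: ConfigurationManager.AppSettings only reads the <appSettings> section of the configuration file of the process that is executing (for a test project calling the web service through a web reference, that is the test project's app.config, not the service's Web.config), and the <applicationSettings> section generated for Settings.settings is a different section, read through the generated Properties.Settings class. A small sketch of the two access paths (the key and setting names are examples):

        // Reads <appSettings> from the *running* project's config file
        // (Web.config when executing inside the web application,
        //  the test project's app.config when running tests).
        string fromAppSettings =
            System.Configuration.ConfigurationManager.AppSettings["testkey"];

        // Reads the <applicationSettings> values generated from Settings.settings;
        // "WebServiceUrl" is an example of a setting name.
        // string url = Properties.Settings.Default.WebServiceUrl;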

    Read the article
