Search Results

Search found 15630 results on 626 pages for 'variable variables'.


  • How to have a shell script available everywhere I SSH to

    - by aib
    I have a shell script which I simply cannot do without: bar, from Theiling Online. I use SSH a lot, on a variety of *nix servers. However, I am not a system administrator and usually don't have the time or privileges to install it on every server I connect to. It is apparently a very portable sh script and has command-line options to export itself as a shell function, which got me thinking: could I use one of OpenSSH's subjectively obscure features to export it everywhere I go? My first thought was to assign the source to an environment variable like BAR="cat -v" and then execute it on the other side as `$BAR`, but 1) I can't even get the cat example to work locally, 2) I don't know how to put the script's actual multiline source into an environment variable, and 3) I have yet to see a machine with PermitUserEnvironment enabled. I guess I could even do with an ssh option to write a file called ~/bar at logon, but a more volatile solution would be better. Calling wget http://.../bar at logon would be unacceptable. Any ideas? P.S. PuTTY-specific solutions, though I doubt any exist, are also fine.
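
    One possible approach (an editorial sketch, not from the original post; user@host and ~/bin/bar are placeholders) is to embed the script in the ssh command line itself, base64-encoded so quoting stays safe, unpack it into a temporary directory on the far side, and hand over to an interactive shell that has it on PATH. It assumes a Bourne-style remote shell with base64 and mktemp available:

        # run locally; $(base64 < ~/bin/bar) expands on the local side before ssh sends the command
        ssh -t user@host "
          t=\$(mktemp -d) &&
          echo '$(base64 < ~/bin/bar)' | base64 -d > \"\$t/bar\" &&
          chmod +x \"\$t/bar\" &&
          PATH=\"\$t:\$PATH\" exec \"\$SHELL\" -i
        "

    Nothing has to persist on the remote side beyond the temporary directory, and no outbound wget is needed; the long command line is easy to hide behind a local wrapper function.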

    Read the article

  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: secure LDAP queries via the command line and PHP to an AD domain controller with a self-signed certificate. Background: I am working on a project where I need to enable LDAP look-ups from a PHP web application to an MS AD domain controller that is using a self-signed certificate. This self-signed certificate also uses a domain name that is not an FQDN - think of something like people.campus as the domain name. The web application would take the user's credentials and pass them on to the AD domain controller to verify whether the credentials match. This seems simple, but I am having problems trying to get PHP and the self-signed certificate to work. Some people have suggested that I change the TLS_REQCERT variable from "request" to "never" within the OpenLDAP configuration. I am concerned that this might have larger implications, such as a man-in-the-middle attack, and I am not comfortable changing this setting to never. I have also read in some places online that one can add a certificate as a trusted source within the OpenLDAP configuration file. Is that something I could do in my situation? Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have OpenLDAP use that file for the trust that it needs, so that I do not need to adjust the variable from request to never? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need to use? Is there an alternate method of handling this issue that would be better in the long run? I have some CentOS servers and some Ubuntu servers that I am working with to try to get this going. Thanks in advance for your help and ideas.
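
    A hedged sketch of the command-line part (dc.people.campus and the file paths are placeholders, and the client config lives at /etc/openldap/ldap.conf on CentOS but /etc/ldap/ldap.conf on Ubuntu): the certificate the controller presents can be captured straight off the TLS handshake with openssl, saved as PEM, and referenced with TLS_CACERT, which lets TLS_REQCERT stay strict:

        # capture the certificate the domain controller presents on port 636
        echo | openssl s_client -connect dc.people.campus:636 -showcerts 2>/dev/null \
          | openssl x509 -outform PEM > /etc/openldap/certs/people-campus-dc.pem

        # then point the OpenLDAP client libraries (which PHP's ldap extension uses) at it:
        #   TLS_CACERT /etc/openldap/certs/people-campus-dc.pem

    Since the certificate's subject is not a resolvable FQDN, hostname checking may still complain even once the trust chain is accepted, so it is worth testing with ldapsearch over ldaps:// before wiring it into PHP.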

    Read the article

  • How do I get started with The M-Project, a mobile HTML5 JavaScript framework, on Windows?

    - by Bruce Whealton
    The website for this great tool, called The M-Project, says that I will need to add a doskey like this: doskey espresso=node C:\Path\To\Espresso\bin\espresso.js $1 $2 $3 $4 (it is a tool for creating native mobile apps with the PhoneGap/Cordova library, and it seems to be something that would be very helpful in this process). If I enter that at a command prompt in Windows 7 or 8, it's not going to stick around or persist. Is it an environment variable? This page: http://www.the-m-project.org/ says that it will work on Windows with some additional tools installed. The next line says that Node.js is needed, so I don't know if that is the additional tools mentioned above. Also, in an old discussion I read that one could just install Cygwin. What would that do? It doesn't actually install any of the Linux distributions. I did install Ubuntu 12.04 Server with VirtualBox because I thought it would be good to learn more about using Linux, as I manage websites that are on a dedicated host. Anyway, the suggestion to install Cygwin did not go into any details... I guess it would allow one to create a bash profile, which would only work in a Cygwin command-line window. Is that right? Isn't there a similar file that one could use in Windows, or an environment variable that one could set, to achieve the same result? Thanks, Bruce
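
    If you do go the Cygwin route, the rough equivalent of that doskey macro is just a function in ~/.bashrc (a hedged sketch; the path is the one from the question and may need adjusting):

        # ~/.bashrc inside Cygwin -- node here is the native Windows node.exe,
        # so it is given a Windows-style path rather than a /cygdrive one
        espresso() { node 'C:\Path\To\Espresso\bin\espresso.js' "$@"; }

    On the plain Windows side, the usual way to make doskey macros persist is a macro file loaded with doskey /macrofile from a startup script or from cmd's AutoRun registry value, rather than an environment variable.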

    Read the article

  • Script to list users' mapped drives not giving results or errors

    - by user223631
    We are in the process of migrating two file servers to a new server. We have mapped drives via user group in Group Policy. Many users have manually mapped drives, and we need to find these mappings. I have created a PowerShell script that remotely gets the drive mappings. It works on most computers, but there are many that are not returning results, and I am not getting any error messages. Each workstation on the list creates a text file, and the ones that are not returning results have no text in their files. I can ping these machines. If a machine is not turned on, I do get an error message that the RPC server is not available. My domain user account is in a group that is in the local Administrators group. I have no idea why some are not working. Here is the script:

        # Load list into variable, which will become an array of strings
        If( !(Test-Path C:\Scripts)) { New-Item C:\Scripts -ItemType directory }
        If( !(Test-Path C:\Scripts\Computers)) { New-Item C:\Scripts\Computers -ItemType directory }
        If( !(Test-Path C:\Scripts\Workstations.txt)) { "No Workstations found. Please enter a list of Workstations under Workstation.txt"; Return}
        If( !(Test-Path C:\Scripts\KnownMaps.txt)) { "No Mapping to check against. Please enter a list of Known Mappings under KnownMaps.txt"; Return}
        $computerlist = Get-Content C:\Scripts\Workstations.txt

        # Loop through each item in the array (each computer in the list of computers we loaded into the variable)
        ForEach ($computer in $computerlist)
        {
            $diskObject = Get-WmiObject Win32_MappedLogicalDisk -computerName $computer |
                Select Name,ProviderName | Out-File C:\Tester\Computers\$computer.txt -width 200
        }

        Select-String -Path C:\Tester\Computers\*.txt -Pattern cmsfiles | Out-File C:\Tester\Drivemaps-all.txt
        $strings = Get-Content C:\Tester\KnownMaps.txt
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -notmatch -simplematch | Out-File C:\Tester\Drivemaps-nonmatch.txt -Width 200
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -simplematch | Out-File C:\Tester\Drivemaps-match.txt -Width 200

    Read the article

  • Apache and MySQL not working well after extending filesystem

    - by xtrimsky
    I had 4GB on my /var (/dev/mapper/vg00-var) filesystem, and I wanted to extend it to 160GB. I did it following this tutorial: http://faq.1and1.com/dedicated_servers/root_server/linux_admin_help/7.html Now I have 160GB:

        Filesystem             Size  Used Avail Use% Mounted on
        /dev/md1               4.0G  424M  3.6G  11% /
        /dev/mapper/vg00-usr   4.3G  1.4G  3.0G  32% /usr
        /dev/mapper/vg00-var   198G  6.5G  192G   4% /var
        /dev/mapper/vg00-home  4.3G  4.4M  4.3G   1% /home
        none                   1.1G     0  1.1G   0% /tmp

    Now I have a problem: each time I reboot, I also need to restart Apache with "apachectl -k restart" before it works, which is already terrible. I think this is because /var contains the htdocs. The worst part is that MySQL is not starting at all, and MySQL also has files in /var. What have I done wrong? :( Thank you. EDIT: attaching /var/log/mysqld.log:

        120602 11:17:44 InnoDB: Waiting for the background threads to start
        120602 11:17:45 InnoDB: 1.1.8 started; log sequence number 8354009
        120602 11:17:45 [ERROR] /usr/libexec/mysqld: unknown variable 'set-variable=local-infile=0'
        120602 11:17:45 [ERROR] Aborting
        120602 11:17:45 InnoDB: Starting shutdown...
        120602 11:17:46 InnoDB: Shutdown completed; log sequence number 8354009
        120602 11:17:46 [Note] /usr/libexec/mysqld: Shutdown complete
        120602 11:17:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
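
    The mysqld log points at a problem that is independent of the resize: "set-variable=" is obsolete my.cnf syntax that newer mysqld builds reject outright. A hedged sketch of the fix (the file may be /etc/my.cnf or /etc/mysql/my.cnf depending on the distro):

        # locate the offending line
        grep -n 'set-variable' /etc/my.cnf
        # edit it so "set-variable=local-infile=0" becomes just "local-infile=0",
        # then restart the service
        service mysqld restart

    If MySQL still refuses to start after that, any remaining errors should at least reach mysqld.log instead of aborting on the first unknown variable.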

    Read the article

  • MySQL-based authentication with crypt()ed passwords fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far, it's all nice and simple, but I can't get the MySQL-based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

        DBDriver mysql
        DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"
        <Directory />
            AllowOverride None
            Order allow,deny
            allow from all
            AuthName 'CalDav'
            AuthType Basic
            AuthBasicProvider dbd
            require valid-user
            AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
        </Directory>

    I've tested the query, and it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?
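
    One thing worth checking (an editorial aside, not from the post): mod_authn_dbd hands whatever the query returns to the same password validator mod_auth_basic uses, which, as far as I can tell, accepts traditional crypt(), Apache's $apr1$ MD5 and {SHA} hashes, but not MySQL's PASSWORD() output or a plain MD5() hex digest. Generating a reference hash for a known password and comparing it with the stored column shows quickly whether the format is the problem:

        # both produce traditional crypt() output ("ab" is an arbitrary two-character salt)
        openssl passwd -crypt -salt ab secret
        htpasswd -nbd someuser secret

    If the stored values turn out to be in an unsupported format, rehashing that column (or having the query return a compatible hash) is likely the simpler fix.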

    Read the article

  • Error at the end of APC install

    - by cinqoTimo
    I need to get APC running for a Drupal install of mine. I found a fairly concise guide at http://blog.4rev.net/2009-09/installing-apc-accelerator-into-php5-fedora-core-11/ for installing on FC11, only I am using FC12. I figured I would give it a shot. I was able to run the following commands successfully, and yum installed FC12 versions of everything in the FC11 guide:

        yum install php-pear
        yum install php-devel httpd-devel
        yum groupinstall 'Development Tools'
        yum groupinstall 'Development Libraries'

    Then I tried pecl install apc. Everything looked good until it got to the end, where it output the following error:

        /var/tmp/APC/php_apc.c: In function ‘zif_apc_compile_file’:
        /var/tmp/APC/php_apc.c:881: warning: unused variable ‘eg_class_table’
        /var/tmp/APC/php_apc.c:881: warning: unused variable ‘eg_function_table’
        /var/tmp/APC/php_apc.c: At top level:
        /var/tmp/APC/php_apc.c:959: error: duplicate ‘static’
        make: *** [php_apc.lo] Error 1
        ERROR: `make' failed

    Some people have had success with installing apc-beta, but that didn't work for me. Any suggestions? Is there something I missed that is critical in the FC12 version?
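
    When pecl's in-place build dies on a compile error like this, one hedged workaround (not FC12-specific; the line number is simply whatever gcc reports) is to build the extension from the downloaded tarball so php_apc.c can be touched up before compiling:

        pecl download apc
        tar xzf APC-*.tgz && cd APC-*/
        vi +959 php_apc.c    # drop the repeated 'static' qualifier gcc is pointing at
        phpize && ./configure && make && make install
        echo "extension=apc.so" > /etc/php.d/apc.ini    # Fedora's PHP config drop-in directory

    Grabbing a newer APC tarball than the one pecl installs by default (for example via pecl install apc-beta) is the other common route, since later releases are reported to build cleanly against newer gcc.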

    Read the article

  • How do I install gfortran (via cygwin and etexteditor) and enable ifort under Windows XP?

    - by bez
    I'm a newbie in the Unix world so all this is a little confusing to me. I'm having trouble compiling some Fortran files under Cygwin on Windows XP. Here's what I've done so far: Installed the e text editor. Installed Cygwin via the "automatic" option inside e text editor. I need to compile some Fortran files so via the "manage bundles" option I installed the Fortran bundle as well. However, when I select "compile single file" I get an error saying gfortran was missing, and then that I need to set the TM_FORTRAN variable to the full path of my compiler. I tried opening a Cygwin bash shell at the path mentioned (.../bin/gfortran), but the compiler was nowhere to be found. Can someone tell me how to install this from the Cygwin command line? Where do I need to update the TM_FORTRAN variable for the bundle to work? Also, how do I change the bundle "compile" option to work with ifort (my native compiler) on Windows? I've read the bundle file, but it is totally incomprehensible to me. Ifort is a Windows compiler, invoked simply by ifort filename.f90, since it is on the Windows path. I know this is a lot to ask of a first time user here, but I really would appreciate any time you can spare to help.
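
    A hedged starting point (the package name gcc4-fortran is from the Cygwin releases of that era and may differ today): Cygwin's setup.exe doubles as a command-line installer, and once gfortran exists the bundle only needs TM_FORTRAN to point at it somewhere the editor can see, for example the shell profile the bundle commands run under or e's own environment-variable settings:

        # from a Windows command prompt, in the folder holding Cygwin's setup.exe:
        #   setup.exe -q -P gcc4-fortran
        # then, inside a Cygwin bash shell:
        which gfortran                                   # usually /usr/bin/gfortran
        echo 'export TM_FORTRAN=/usr/bin/gfortran' >> ~/.bashrc

    For ifort the same variable could point at the Windows compiler instead, but how the bundle's compile command quotes paths and flags would need checking before relying on it.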

    Read the article

  • Excel 2010 - more than 1 calculation within an IF() statement

    - by Da Bajan
    I have a situation where I need to calculate shipping values based on the length of the supply chain. Easy; however, I need to handle instances where an increased amount is required based on specific date criteria. My example is as follows:

        Shipvalue = 100
        Date1 = 1/1/2013 (Jan) - ship 50% more than usual
        Date2 = 2/1/2013 (Feb) - ship 25% more than usual
        Date3 = 3/1/2013 (Mar) - ship 25% more than usual

        Supply chain length is:
        June - October: 100 days
        November - March: 140 days
        April - June: 100 days

    The issue I have is with the increase in the number of days. My formula is:

        IF( Date1-(Supply chain length + any extra days)=today's date, shipvalue+(shipvalue x 50%),
          IF( Date2-(Supply chain length + any extra days)=today's date, shipvalue+(shipvalue x 50%),
            IF( Date2-(Supply chain length + any extra days)=today's date, shipvalue+(shipvalue x 50%),
              IF( preceding cell<>0, shipvalue, 0) ) ) )

    Now the problem with this is that if the length of the supply chain increases, the formula misses all but the 1st increase. So I thought of adding a variable that would be incremented and checked every time an increased shipping amount is made. So, how do I do both the calculation for the increased shipping value and set the variable in one part of the IF statement?

    Read the article

  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
    Hi clerks,   Offloading the backup operation to another host using disk cloning could really improve the performance on highly busy databases ( 24x7, zero downtime and all this stuff ...) There are well-known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies:   Assumptions: *********** ASM diskgroups name:   +data_${db_name} : asm data diskgroup +fra_${db_name} :  asm fra  diskgroup   EMC Time Finder sync groups name:   rac_${DB_NAME}_data_tf : data group rac_${DB_NAME}_fra_tf:   fra group     There are two scripts, one located on the production box ( bck_database.sh ) and the other one on the backup server node ( bck_database_mirror.sh ). The second one is remotely executed from the production host. There are a bunch of variables along the code with self-explanatory names I guess, anyway let me know if you want some help     #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    NAME ###     bck_database.sh ### ###    DESCRIPTION ###     Database backup on third mirror ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###    Oracle            28/01/10             - Creacion ###   V_DATE=`/bin/date +%Y%m%d_%H%M%S` V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1     ADMIN_DIR=`dirname $0` . ${ADMIN_DIR}/setenv_instance.sh -- This script should set the instance vars like Oracle Home, Sid, db_name ... if [ $? -ne 0 ] then   echo "Error when setting the environment."   exit 1 fi   echo "${V_DATE} ####################################################" echo "Executing database backup: ${DB_NAME}" echo "####################################################################"   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm data diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt if [ $? -ne 0 ] then   echo "Error when sync asm data diskgroups"   exit 2 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm data disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm data diskgroups"   exit 3 fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm fra diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt if [ $?
-ne 0 ] then   echo "Error when sync asm fra diskgroups"   exit 4 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm fra disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm fra diskgroups"   exit 5 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "ASM sync sucessfully completed!" echo "####################################################################"     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to BEGIN BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter system archive log current;   alter database ${DB_NAME} begin backup; ! if [ $? -ne 0 ] then   echo "Error when updating database status to BEGIN backup"   exit 6 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm data disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm data disks"   exit 7 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to END BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter database ${DB_NAME} end backup;   alter system archive log current; ! if [ $? -ne 0 ] then   echo "Error when updating database status to END backup"   exit 8 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Generating controlfile copies...." echo "####################################################################" rman<<-! connect target / run { allocate channel ch1 type DISK; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl'; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl'; } ! if [ $? -ne 0 ] then   echo "Error generating controlfile copies"   exit 9 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Resync RMAN catalog ....." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} resync catalog; ! if [ $? -ne 0 ] then   echo "Error when resyncing RMAN catalog"   exit 10 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm fra disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm fra disks"   exit 11 fi     echo "WARNING!: Calling bck_database_mirror.sh host ${NODE_BCK_SERVER}..." ssh ${NODO_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh if [ $? 
-ne 0 ] then   echo "Error, when remote executing the backup "   exit 12 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Cleaning the archived redo logs already copied to tape ..." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} run { resync catalog; delete noprompt archivelog all backed up 1 times to device type sbt; } ! if [ $? -ne 0 ] then   echo "Error when cleaning the archived redo logs"   exit 13 fi echo "${V_DATE} ####################################################" echo "Backup sucessfully executed!!" echo "####################################################################" exit 0   ------------------------------------------------------------------------------ ------------------------** BACKUP SERVER NODE ** ----------------------------- ------------------------------------------------------------------------------   #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    ###    NAME ###     bck_database_mirror.sh ### ###    DESCRIPTION ###      Backup @ backup server ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###      Oracle                    28/01/10     - Creacion         V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ASM instance ..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_asm.sh -- This script is supposed to start the ASM instance in the backup server   if [ $? -ne 0 ]   then     echo "Error when tying to start ASM instance."     exit 1   fi       . ${V_ADMIN_DIR}/setenv_asm.sh -- This script is supposed to set the env. variables of the ASM instance   if [ $? -ne 0 ]   then     echo "Error when setting the ASM environment"     exit 1   fi       V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "The asm diskgroups/disks dettected are the following ..."   echo "####################################################################"     sqlplus /nolog <<-!     whenever sqlerror exit 1     connect / as sysdba     whenever sqlerror exit     SET LINES 200     COL PATH FORMAT A25     SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP WHERE DISK.GROUP_NUMBER=DISK_GROUP.GROUP_NUMBER; !       V_ADMIN_DIR=`dirname $0`   . ${V_ADMIN_DIR}/setenv_instance.sh -- This script is supposed to set the env. variables of the database instance   if [ $? -ne 0 ]   then     echo "Error when setting the database instance environment"     exit 1   fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ${DB_NAME} in MOUNT mode..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_instance_mount.sh -- This script is supposed to do a startup mount   if [ $? -ne 0 ]   then     echo "Error starting  ${DB_NAME} in MOUNT mode"     exit 1   fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Executing RMAN backup..."   echo "####################################################################"   rman<<-!   
connect target /   connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}   run {   allocate channel ch1 type 'SBT_TAPE' parms'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)'; -- TDPO Media Library   crosscheck archivelog all;   backup tag BCK_CONTROLFILE_ST_${DB_NAME}   format 'ctl_%d_%s__%p_%t'   controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';   backup tag BCK_DATAFILE_ST_${DB_NAME} full   format 'db_%d_%s_%p_%t'database;   backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;   release channel ch1;   } !   if [ $? -ne 0 ]   then     echo "Error executing the RMAN backup"     exit 1   fi     ${V_ADMIN_DIR}/stop_instance_immediate.sh -- This script is supposed to do a shutdown immediate of the database instance   ${ADMIN_DIR}/stop_asm_immediate.sh -- This script is supposed to do a shutdown immediate of the ASM instance   exit 0     fi   Hope it helps someone! --L

    Read the article

  • Running SSIS packages from C#

    - by Piotr Rodak
    Most of the developers and DBAs know about two ways of deploying packages: You can deploy them to database server and run them using SQL Server Agent job or you can deploy the packages to file system and run them using dtexec.exe utility. Both approaches have their pros and cons. However I would like to show you that there is a third way (sort of) that is often overlooked, and it can give you capabilities the ‘traditional’ approaches can’t. I have been working for a few years with applications that run packages from host applications that are implemented in .NET. As you know, SSIS provides programming model that you can use to implement more flexible solutions. SSIS applications are usually thought to be batch oriented, with fairly rigid architecture and processing model, with fixed timeframes when the packages are executed to process data. It doesn’t to be the case, you don’t have to limit yourself to batch oriented architecture. I have very good experiences with service oriented architectures processing large amounts of data. These applications are more complex than what I would like to show here, but the principle stays the same: you can execute packages as a service, on ad-hoc basis. You can also implement and schedule various signals, HTTP calls, file drops, time schedules, Tibco messages and other to run the packages. You can implement event handler that will trigger execution of SSIS when a certain event occurs in StreamInsight stream. This post is just a small example of how you can use the API and other features to create a service that can run SSIS packages on demand. I thought it might be a good idea to implement a restful service that would listen to requests and execute appropriate actions. As it turns out, it is trivial in C#. The application is implemented as console application for the ease of debugging and running. In reality, you might want to implement the application as Windows service. To begin, you have to reference namespace System.ServiceModel.Web and then add a few lines of code: Uri baseAddress = new Uri("http://localhost:8011/");               WebServiceHost svcHost = new WebServiceHost(typeof(PackRunner), baseAddress);                           try             {                 svcHost.Open();                   Console.WriteLine("Service is running");                 Console.WriteLine("Press enter to stop the service.");                 Console.ReadLine();                   svcHost.Close();             }             catch (CommunicationException cex)             {                 Console.WriteLine("An exception occurred: {0}", cex.Message);                 svcHost.Abort();             } The interesting lines are 3, 7 and 13. In line 3 you create a WebServiceHost object. In line 7 you start listening on the defined URL and then in line 13 you shut down the service. As you have noticed, the WebServiceHost constructor is accepting type of an object (here: PackRunner) that will be instantiated as singleton and subsequently used to process the requests. This is the class where you put your logic, but to tell WebServiceHost how to use it, the class must implement an interface which declares methods to be used by the host. The interface itself must be ornamented with attribute ServiceContract. 
[ServiceContract]     public interface IPackRunner     {         [OperationContract]         [WebGet(UriTemplate = "runpack?package={name}")]         string RunPackage1(string name);           [OperationContract]         [WebGet(UriTemplate = "runpackwithparams?package={name}&rows={rows}")]         string RunPackage2(string name, int rows);     } Each method that is going to be used by WebServiceHost has to have attribute OperationContract, as well as WebGet or WebInvoke attribute. The detailed discussion of the available options is outside of scope of this post. I also recommend using more descriptive names to methods . Then, you have to provide the implementation of the interface: public class PackRunner : IPackRunner     {         ... There are two methods defined in this class. I think that since the full code is attached to the post, I will show only the more interesting method, the RunPackage2.   /// <summary> /// Runs package and sets some of its variables. /// </summary> /// <param name="name">Name of the package</param> /// <param name="rows">Number of rows to export</param> /// <returns></returns> public string RunPackage2(string name, int rows) {     try     {         string pkgLocation = ConfigurationManager.AppSettings["PackagePath"];           pkgLocation = Path.Combine(pkgLocation, name.Replace("\"", ""));           Console.WriteLine();         Console.WriteLine("Calling package {0} with parameter {1}.", name, rows);                  Application app = new Application();         Package pkg = app.LoadPackage(pkgLocation, null);           pkg.Variables["User::ExportRows"].Value = rows;         DTSExecResult pkgResults = pkg.Execute();         Console.WriteLine();         Console.WriteLine(pkgResults.ToString());         if (pkgResults == DTSExecResult.Failure)         {             Console.WriteLine();             Console.WriteLine("Errors occured during execution of the package:");             foreach (DtsError er in pkg.Errors)                 Console.WriteLine("{0}: {1}", er.ErrorCode, er.Description);             Console.WriteLine();             return "Errors occured during execution. Contact your support.";         }                  Console.WriteLine();         Console.WriteLine();         return "OK";     }     catch (Exception ex)     {         Console.WriteLine(ex);         return ex.ToString();     } }   The method accepts package name and number of rows to export. The packages are deployed to the file system. The path to the packages is configured in the application configuration file. This way, you can implement multiple services on the same machine, provided you also configure the URL for each instance appropriately. To run a package, you have to reference Microsoft.SqlServer.Dts.Runtime namespace. This namespace is implemented in Microsoft.SQLServer.ManagedDTS.dll which in my case was installed in the folder “C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies”. Once you have done it, you can create an instance of Microsoft.SqlServer.Dts.Runtime.Application as in line 18 in the above snippet. It may be a good idea to create the Application object in the constructor of the PackRunner class, to avoid necessity of recreating it each time the service is invoked. Then, in line 19 you see that an instance of Microsoft.SqlServer.Dts.Runtime.Package is created. The method LoadPackage in its simplest form just takes package file name as the first parameter. Before you run the package, you can set its variables to certain values. 
This is a great way of configuring your packages without all the hassle of dtsConfig files. In the above code sample, the variable "User::ExportRows" is set to the value of the method's "rows" parameter. Eventually, you execute the package. The method doesn't throw exceptions; you have to test the result of the execution yourself. If the execution wasn't successful, you can examine the collection of errors exposed by the package. These are the familiar errors you often see during development and debugging of the package. If you run the package from code, you have the opportunity to persist them or log them using your favourite logging framework. The package itself is very simple; it connects to my AdventureWorks database and saves the number of rows specified in the variable "User::ExportRows" to a file. You should know that before you run the package, you can also change its connection strings, logging, events and much more. I attach a solution with the test service, as well as a project with two test packages. To test the service, you have to run it and wait for the message saying that the host is started. Then, just type (or copy and paste) the command below into your browser. http://localhost:8011/runpackwithparams?package=%22ExportEmployees.dtsx%22&rows=12 When everything works fine, and you have modified the package to point to your AdventureWorks database, you should see "OK" wrapped in XML. I stopped the database service to simulate an invalid connection string situation; the output of the request is different then, and the service console window shows more information. As you can see, implementing a service-oriented ETL framework is not a very difficult task. You have the ability to configure the packages before you run them, and you can implement logging that is consistent with the rest of your system. In an application I have worked with, we also have resource monitoring and execution control. We don't allow more than a certain number of packages to run simultaneously. This ensures we don't strain the server and that we use memory and CPUs efficiently. The attached zip file contains two projects. One is the package runner; it has to be executed with administrative privileges as it registers an HTTP namespace. The other project contains two simple packages. This is really a cool thing, you should check it out!

    Read the article

  • Oracle HRMS API – Update Employee

    - by PRajkumar
    API - hr_person_api.update_person Example --   Before Firing Update API -- Middle Name and Status is NULL lets update Middle Name and Status     DECLARE     -- Local Variables     -- -----------------------     ln_object_version_number       PER_ALL_PEOPLE_F.OBJECT_VERSION_NUMBER%TYPE  := 7;      lc_dt_ud_mode                            VARCHAR2(100)                                                                                     := NULL;      ln_assignment_id                       PER_ALL_ASSIGNMENTS_F.ASSIGNMENT_ID%TYPE          := 33564;      lc_employee_number                 PER_ALL_PEOPLE_F.EMPLOYEE_NUMBER%TYPE               := 'PRAJ_01';        -- Out Variables for Find Date Track Mode API     -- ----------------------------------------------------------------     lb_correction                                  BOOLEAN;      lb_update                                        BOOLEAN;      lb_update_override                      BOOLEAN;       lb_update_change_insert           BOOLEAN;    -- Out Variables for Update Employee API     -- -----------------------------------------------------------      ld_effective_start_date                       DATE;      ld_effective_end_date                        DATE;      lc_full_name                                         PER_ALL_PEOPLE_F.FULL_NAME%TYPE;      ln_comment_id                                    PER_ALL_PEOPLE_F.COMMENT_ID%TYPE;       lb_name_combination_warning    BOOLEAN;      lb_assign_payroll_warning             BOOLEAN;      lb_orig_hire_warning                        BOOLEAN; BEGIN     -- Find Date Track Mode     -- --------------------------------      dt_api.find_dt_upd_modes      (    -- Input Data Elements           -- ------------------------------           p_effective_date                           => TO_DATE('29-JUN-2011'),           p_base_table_name                    => 'PER_ALL_ASSIGNMENTS_F',           p_base_key_column                   => 'ASSIGNMENT_ID',           p_base_key_value                       => ln_assignment_id,           -- Output data elements           -- -------------------------------          p_correction                                   => lb_correction,          p_update                                         => lb_update,          p_update_override                       => lb_update_override,          p_update_change_insert            => lb_update_change_insert    );      IF ( lb_update_override = TRUE OR lb_update_change_insert = TRUE )    THEN           -- UPDATE_OVERRIDE           -- ---------------------------------           lc_dt_ud_mode := 'UPDATE_OVERRIDE';    END IF;    IF ( lb_correction = TRUE )    THEN          -- CORRECTION          -- ----------------------          lc_dt_ud_mode := 'CORRECTION';    END IF;    IF ( lb_update = TRUE )    THEN         -- UPDATE         -- --------------          lc_dt_ud_mode := 'UPDATE';    END IF;       -- Update Employee API     -- ---------------------------------       hr_person_api.update_person     (       -- Input Data Elements             -- ------------------------------             p_effective_date                              => TO_DATE('29-JUN-2011'),             p_datetrack_update_mode         => lc_dt_ud_mode,             p_person_id                                     => 32979,             p_middle_names                            => 'TEST',             p_marital_status                             => 'M',             -- Output Data Elements             -- ----------------------------------            p_employee_number       
                => lc_employee_number,            p_object_version_number              => ln_object_version_number,            p_effective_start_date                      => ld_effective_start_date,            p_effective_end_date                       => ld_effective_end_date,            p_full_name                                       => lc_full_name,            p_comment_id                                   => ln_comment_id,            p_name_combination_warning   => lb_name_combination_warning,            p_assign_payroll_warning           => lb_assign_payroll_warning,            p_orig_hire_warning                      => lb_orig_hire_warning     );      COMMIT; EXCEPTION        WHEN OTHERS THEN                    ROLLBACK;                    dbms_output.put_line(SQLERRM); END; / SHOW ERR;   After Firing Update Employee API -- Middle Name and Status  

    Read the article

  • CodePlex Daily Summary for Thursday, June 23, 2011

    CodePlex Daily Summary for Thursday, June 23, 2011Popular ReleasesMiniTwitter: 1.70: MiniTwitter 1.70 ???? ?? ????? xAuth ?? OAuth ??????? 1.70 ??????????????????????????。 ???????????????? Twitter ? Web ??????????、PIN ????????????????????。??????????????????、???????????????????????????。Total Commander SkyDrive File System Plugin (.wfx): Total Commander SkyDrive File System Plugin 0.8.7b: Total Commander SkyDrive File System Plugin version 0.8.7b. Bug fixes: - BROKEN PLUGIN by upgrading SkyDriveServiceClient version 2.0.1b. Please do not forget to express your opinion of the plugin by rating it! Donate (EUR)SkyDrive .Net API Client: SkyDrive .Net API Client 2.0.1b (RELOADED): SkyDrive .Net API Client assembly has been RELOADED in version 2.0.1b as a REAL API. It supports the followings: - Creating root and sub folders - Uploading and downloading files - Renaming and deleting folders and files Bug fixes: - BROKEN API (issue 6834) Please do not forget to express your opinion of the assembly by rating it! Donate (EUR)Mini SQL Query: Mini SQL Query v1.0.0.59794: This release includes the following enhancements: Added a Most Recently Used file list Added Row counts to the query (per tab) and table view windows Added the Command Timeout option, only valid for MSSQL for now - see options If you have no idea what this thing is make sure you check out http://pksoftware.net/MiniSqlQuery/Help/MiniSqlQueryQuickStart.docx for an introduction. PK :-]HydroDesktop - CUAHSI Hydrologic Information System Desktop Application: 1.2.591 Beta Release: 1.2.591 Beta ReleaseJavaScript Web Resource Manager for Microsoft Dynamics CRM 2011: JavaScript Web Resource Manager (v1.0.521.54): Initial releasepatterns & practices: Project Silk: Project Silk Community Drop 12 - June 22, 2011: Changes from previous drop: Minor code changes. New "Introduction" chapter. New "Modularity" chapter. Updated "Architecture" chapter. Updated "Server-Side Implementation" chapter. Updated "Client Data Management and Caching" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To ins...MapWinGIS ActiveX Map and GIS Component: Second Release Candidate for v4.8: This is the second release candidate for v4.8. This is the ocx installer only, with the new dependancies like GDAL v1.8, GEOS v3.3 and updated libraries for MrSid and ECW.Epinova.CRMFramework: Epinova.CRMFramework 0.5: Beta Release Candidate. Issues solved in this release: http://crmframework.codeplex.com/workitem/593SQL Server HowTo: Version 1.0: Initial ReleaseDropBox Linker: DropBox Linker 1.3: Added "Get links..." dialog, that provides selective public files links copying Get links link added to tray menu as the default option Fixed URL encoding .NET Framework 4.0 Client Profile requiredDotNetNuke® Community Edition: 06.00.00 Beta: Beta 1 (Build 2300) includes many important enhancements to the user experience. The control panel has been updated for easier access to the most important features and additional forms have been adapted to the new pattern. This release also includes many bug fixes that make it more stable than previous CTP releases. Beta ForumsTerraria World Viewer: Version 1.4: Update June 21st World file will be stored in memory to minimize chances of Terraria writing to it while we read it. Different set of APIs allow the program to draw the world much quicker. 
Loading world information (world variables, chest list) won't cause the GUI to freeze at all anymore. Re-introduced the "Filter chests" checkbox: Allow disabling of chest filter/finder so all chest symbos are always drawn. First-time users will have a default world path suggested to them: C:\Users\U...BlogEngine.NET: BlogEngine.NET 2.5 RC: BlogEngine.NET Hosting - Click Here! 3 Months FREE – BlogEngine.NET Hosting – Click Here! This is a Release Candidate version for BlogEngine.NET 2.5. The most current, stable version of BlogEngine.NET is version 2.0. Find out more about the BlogEngine.NET 2.5 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation. If you are upgrading from a previous version, please take a look at ...Microsoft All-In-One Code Framework - a centralized code sample library: All-In-One Code Framework 2011-06-19: Alternatively, you can install Sample Browser or Sample Browser VS extension, and download the code samples from Sample Browser. Improved and Newly Added Examples:For an up-to-date code sample index, please refer to All-In-One Code Framework Sample Catalog. NEW Samples for Windows Azure Sample Description Owner CSAzureStartupTask The sample demonstrates using the startup tasks to install the prerequisites or to modify configuration settings for your environment in Windows Azure Rafe Wu ...IronPython: 2.7.1 Beta 1: This is the first beta release of IronPython 2.7. Like IronPython 54498, this release requires .NET 4 or Silverlight 4. This release will replace any existing IronPython installation. The highlights of this release are: Updated the standard library to match CPython 2.7.2. Add the ast, csv, and unicodedata modules. Fixed several bugs. IronPython Tools for Visual Studio are disabled by default. See http://pytools.codeplex.com for the next generation of Python Visual Studio support. See...Facebook C# SDK: 5.0.40: This is a RTW release which adds new features to v5.0.26 RTW. Support for multiple FacebookMediaObjects in one request. Allow FacebookMediaObjects in batch requests. Removes support for Cassini WebServer (visual studio inbuilt web server). Better support for unit testing and mocking. updated SimpleJson to v0.6 Refer to CHANGES.txt for details. For more information about this release see the following blog posts: Facebook C# SDK - Multiple file uploads in Batch Requests Faceb...Candescent NUI: Candescent NUI (7774): This is the binary version of the source code in change set 7774.EffectControls-Silverlight controls with animation effects: EffectControls: EffectControlsNLog - Advanced .NET Logging: NLog 2.0 Release Candidate: Release notes for NLog 2.0 RC can be found at http://nlog-project.org/nlog-2-rc-release-notesNew ProjectsA SharePoint Client Object Model Demo in Silverlight: This demo shows how to utilize the SharePoint 2010 Foundation's Client Object Model within a Silverlight application.Azure SDK Extentions (ja): Azure SDK Tools ????????VS??????????????????。 ??????????????????、????????????????。Bango Apple iOS Application Analytics SDK: Bango application analytics is an analytics solution for mobile applications. This SDK provides a framework you can use in your application to add analytics capabilities to your mobile applications. It's developed in Object C and targets the Apple iOS operating system. 
Binsembler: Binsembler allows assembler programmers to easily import files into their source code, so that they'll no longer have to use external flash memory just for a few bytes of data. The project is developed in C#.DataTool: Liver.DTEMICHAG: This project contains the .NET Microframework solution for the EMIC Home Automation Gateway (EMICHAG). This project created within the co-funded research project called HOMEPLANE (http://homeplane.org/). html5_stady: html5 DemoJavaScript Web Resource Manager for Microsoft Dynamics CRM 2011: JavaScript Web Resource Manager for Microsoft Dynamics CRM 2011 helps CRM developers to extract javascript web resources to disk, maintain them and import changes back to CRM database. You will no longer have to perform mutliple clicks operation to update you js web resourcesKiefer SharePoint 2010 Templates: The Kiefer Consulting SharePoint 2010 Site Templates allow you to quickly create SharePoint sites pre-configured to meet specific business needs. These sites use only out of the box SharePoint Foundation 2010 features and will run on SharePoint Server 2010 and Office 365.KxFrame: KxFrame is intended to be free, easy-to-use, rapid business application framework. How many times were you starting to develop application from scratch? And every time you say to yourself: "Just this time, next time I'll make some framework"? It's over now! Lets make business application framework together and enjoy creating applications faster than ever.Made In Hell: Small console game. Just a training in code writings.Merula SmartPages: When you want to remember your data in an ASP.NET site after a postback, you will have to use Session or ViewState to remember your data. It’s a lot work to use Session and ViewStates, I just want to declare variables like in a desktop application. So I created a library called Merula SmartPages. This library will remember the properties and variables of your page. MVC ASPX to Razor View Converter: So this project will convert your MVC ASPX pages you worked so hard to make into cshtml files. You simply drop the exe into the folder and it works. I hope you enjoy, thanks! Netduino Utilities: Netduino classes I use with various odds of hardware I have. Hope it's useful ;-) NetEssentialsBugtrackerProject: Bug tracker for telerikPin My Site: Site to help demonstrate pinning capabilities of IE9Python3 ????: Python3 ????RDI Open CTe: RDI Open CTeRDI Open SPED (Contábil): RDI Open SPED (Contábil)RDI Open SPED (Fiscal): RDI Open SPED (Fiscal)sbc: sbcSimply Helpdesk UserControl: Simply Helpdesk is a user control that allows people to embed a helpdesk directly into their active projects. It is design to be as simple as possible, Minimal configuration is required to get it up and running.SQL Server HowTo: Short T-SQL scripts for difficult tasks from everyday job of programmers and database administrators. Also contains links to external resources with such useful information.Stashy: Stash data simply. Stashy is a Micro-NoSql framework. Stashy is a tiny interface, plus four reference implementations, for adding data storage and retrieval to all your .net types. Drop a single stashy class into your application then you can save and load the lot. storeBags01: hacer pedido de bolsos maletas y morrales compra de bolsos y maletas storebags UMTS Net: Program optimizes deployment of UMTS devicesvAddins - A little different addins framework: vAddins transforms C# and VB into powerful scripting languages for making addins or scripts. 
With this, you no longer need to open Visual Studio or other IDEs, reference assemblies, write the code and then built, move and test... Now you only have to write the code and test!Visual Studio Tags: A Visual Studio extension for tagging your source code files, and then explore your files using the tags.Window Phone Private Media Library: Windows Phone application to protect your media files from unauthorized eyes using your phone.Windows Touch/Mouse/TUIO Multi-touch Gesture recognizer & Trainer: This projects based on Single point Gesture recognizer paper of http://faculty.washington.edu/wobbrock/pubs/uist-07.1.pdf of unistrokes alphabets. The project expand capabilities to allow multi-touch gestures. C# library for rapid use. Mono Client. And C++ Client.WPF, Silverlight PDF viewer: WPF, Silverlight PDF viewer

    Read the article

  • CodePlex Daily Summary for Wednesday, June 20, 2012

    CodePlex Daily Summary for Wednesday, June 20, 2012Popular ReleasesApex: Apex 1.4: Apex 1.4Apex 1.4 provides a framework for rapid MVVM development. Download Apex 1.4 to get the core binaries, Visual Studio Extensions, Project Templates, Samples and Documentation. The 1.4 Release provides a vast number of enhancements via the Apex Broker. The Apex Broker is an object that can be used to retrieve models, get the view for a view model and more, much like an IoC container. The new Zune Style application templates for WPF and Silverlight give a great starting point for makin...Auto Proxy Configuration: V 1.0: This tool consists of a windows application that allows you to define a proxy server for each DNSDomain and a windows service that matches the DNSDomain to the database and set the proxy accordingly.51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.6.11: One Click Install from NuGet Changes to Version 2.1.6.111. Altered the GetIsCrawler method in MobileCapabilities.cs to return null if no value for IsCrawler is provided. Previously the method returned false breaking values provided via the BrowserCap file. 2. Enhanced Xml/Reader.cs to reduce the number of byte objects created when decompressing zipped XML files. 3. Altered Location.cs GetUrl method to remove replacement string tags {0} from the new url if no values were found to replace them...NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.3 - VS2010 + VS2012: This is a small maintenance release to support new VS2012 as well as VS2010. This release is also fixing the issue The "Comment Selection" include the first line after the selection If the new NShader version doesn't highlight your shader, you can try to: Remove the registry entry: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\FontAndColors\Cache Remove all lines using "fx" or "hlsl" in file C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Micr...asp.net membership: v1.0 Membership Management: Abmemunit is a Membership Management Project where developer can use their learning purposes of various issues related to ASP.NET, MVC, ASP.NET membership, Entity Framework, Code First, Ajax, Unit Test, IOC Container, Repository, and Service etc. Though it is a very simple project and all of these topics are used in an easy manner gathering from various big projects, it can easily understand. 

  • Unix/Linux find and sort by date modified

    - by Richard Easton
    How can I do a simple find which would order the results by most recently modified? Here is the current find I am using (I am doing a shell escape in PHP, so that is the reasoning for the variables):

        find '$dir' -name '$str'\* -print | head -10

    How could I have this order the search by most recently modified? (Note I do not want it to sort 'after' the search, but rather find the results based on what was most recently modified.)
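    One hedged way to do this, assuming GNU find (its -printf option is a GNU extension): have find print each file's modification timestamp next to its name, sort numerically in descending order, then strip the timestamp again. Here '$dir' and '$str' stand for the same PHP-escaped variables as above:

        find '$dir' -name '$str'\* -printf '%T@ %p\n' | sort -rn | head -10 | cut -d' ' -f2-

    The ordering still happens after find has walked the tree (find itself has no sort option), but the pipeline returns the ten most recently modified matches rather than the first ten found.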

    Read the article

  • Unable to delete a file using bash script

    - by user3719091
    I'm having problems removing a file in a bash script. I saw the other post with the same problem but none of those solutions solved my problem. The bash script is an OP5 surveillance check and it calls an Expect process that saves a temporary file to the local drive which the bash script reads from. Once it has read the file and checked its status I would like to remove the temporary file. I'm pretty new to scripting so my script may not be as optimal as it can be. Either way it does the job except removing the file once it's done. I will post the entire code below:

        #!/bin/bash
        #GET FLAGS
        while getopts H:c:w: option
        do
            case "${option}" in
                H) HOSTADDRESS=${OPTARG};;
                c) CRITICAL=${OPTARG};;
                w) WARNING=${OPTARG};;
            esac
        done
        ./expect.vpn.check.sh $HOSTADDRESS
        #VARIABLES
        VPNCount=$(grep -o '[0-9]\+' $HOSTADDRESS.op5.vpn.results)
        # Check if the temporary results file exists
        if [ -f $HOSTADDRESS.op5.vpn.results ]
        then
            # If the file exists, print "File Found" message
            echo Temporary results file exist. Analyze results.
        else
            # If the file does NOT exist, print "File NOT Found" message and send message to OP5
            echo Temporary results file does NOT exist. Unable to analyze.
            # Exit with status Critical (exit code 2)
            exit 2
        fi
        if [[ "$VPNCount" > $CRITICAL ]]
        then
            # If the amount of tunnels exceeds the critical threshold, echo a warning message and current threshold and send warning to OP5
            echo "The amount of VPN tunnels exceeds the critical threshold - ($VPNCount)"
            # Exit with status Critical (exit code 2)
            exit 2
        elif [[ "$VPNCount" > $WARNING ]]
        then
            # If the amount of tunnels exceeds the warning threshold, echo a warning message and current threshold and send warning to OP5
            echo "The amount of VPN tunnels exceeds the warning threshold - ($VPNCount)"
            # Exit with status Warning (exit code 1)
            exit 1
        else
            # The amount of tunnels does not exceed the warning threshold.
            # Print an OK message
            echo OK - $VPNCount
            # Exit with status OK
            exit 0
        fi
        #Clean up temporary files.
        rm -f $HOSTADDRESS.op5.vpn.results

    I have tried the following: creating a separate variable called TempFile that specifies the file and using that in the rm command; creating another if statement, similar to the one I use to verify that the file exists, and then rm'ing the filename; and using the complete name of the file (no variables, just plain text). I can remove the file using the full name both in a separate script and directly on the CLI. Is there something in my script that locks the file and prevents me from removing it? I'm not sure what to try next. Thanks in advance!
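    For what it is worth, a likely explanation (a guess, since only the script is visible): every branch of the if/elif/else ends in exit, so the rm at the very bottom is never reached. One common pattern is to register the cleanup with trap so it runs on any exit path; a minimal sketch, reusing the question's file name:

        #!/bin/bash
        # ... getopts parsing of -H/-c/-w exactly as before ...
        RESULTS_FILE="$HOSTADDRESS.op5.vpn.results"
        # from here on, the cleanup runs automatically whenever the script exits, whatever the exit code
        trap 'rm -f "$RESULTS_FILE"' EXIT
        # ... expect call, grep, and the threshold checks with their exit 0/1/2, unchanged ...

    Moving the rm above the exit statements would work as well. Separately, note that > inside [[ ]] compares strings, not numbers; -gt is the numeric test.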

    Read the article

  • Setting up APACHE-JAMES Server

    - by venomrld
    I am trying to set up the Apache James server on Ubuntu and keep getting the annoying error "JAVA_HOME not defined correctly We cannot execute". I have already followed the documentation and set the PATH and JAVA_HOME variables in the /etc/profile file. When I echo them, the values show up. What am I missing?

        echo $JAVA_HOME
        /usr/local/jdk1.6.0_27
        echo $PATH
        $PATH:/usr/local/jdk1.6.0_27/bin

    Please help!
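    A hedged observation based on the output above: echo $PATH printing a literal "$PATH:" prefix suggests the assignment in /etc/profile was not expanded (for example because it sits inside single quotes), and /etc/profile is only read by login shells anyway, so the shell that launches James may never see the variables. The usual shape, with the JDK path taken from the question, is:

        export JAVA_HOME=/usr/local/jdk1.6.0_27
        export PATH="$PATH:$JAVA_HOME/bin"

    After editing /etc/profile the change only takes effect in a new login shell (or after running source /etc/profile), so it is worth re-running the James startup script from such a shell.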

    Read the article

  • Websphere - Performance Monitoring Infrastructure servlet login GET

    - by virtual-lab
    I am trying to make an HTTP call to the WebSphere PMI servlet. WebSphere has security enabled, so I am asked for user credentials before the XML is displayed. What does not work as I expect is that the username and password embedded in the URL are not recognized, and the BASIC authorization prompt is shown instead. That is no use from a third-party application's point of view; I need to pass those credentials with the GET request. Any suggestions?
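    A hedged suggestion: credentials written as user:password@host in the URL are often ignored for container-managed Basic auth, whereas an explicit Authorization header usually is not. From a script or any client that can set headers, something along these lines (host, port, credentials and servlet path are placeholders; the path shown is the customary location of the PMI PerfServlet, worth verifying on your cell):

        # curl builds the Basic Authorization header from -u user:password
        curl -u wasadmin:secret 'https://washost:9443/wasPerfTool/servlet/perfservlet'

    Whatever HTTP library the third-party application uses should have an equivalent way to send the Authorization: Basic header instead of putting the credentials in the URL or query string.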

    Read the article

  • HISTCONTROL=ignoreboth not working Debian Lenny

    - by Mike
    Can anybody confirm whether setting the following environment variables under Debian Lenny (GNU bash, version 3.2.39(1)-release) should stop repeated commands from being saved to the history?

        export HISTCONTROL=ignoreboth
        export HISTSIZE=500

    I have added them to my /etc/bash.bashrc but I keep getting repeated commands. Thanks
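    If it helps, ignoreboth is shorthand for ignorespace plus ignoredups, and ignoredups only suppresses a command that repeats the immediately preceding history entry; older copies elsewhere in the history stay. Bash 3.x also accepts erasedups, which removes all previous occurrences of the line being saved. A sketch of that variant:

        # ignoreboth = ignorespace:ignoredups; erasedups additionally deletes older duplicates
        export HISTCONTROL=ignoreboth:erasedups
        export HISTSIZE=500

    It is also worth checking that nothing later in the startup chain (~/.bashrc, ~/.profile) resets HISTCONTROL after /etc/bash.bashrc has run.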

    Read the article

  • MySQL: Replicating the MySQL database

    - by Lee
    Hi guys, I have a primary write server (server1) which replications to two servers (server2 and server3) which are query servers. I am replicating all databases to these servers including the MySQL database. When i execute a GRANT as follows replication works perfectly.. GRANT execute,select ON database1.* TO `user1`@`host` IDENTIFIED BY 'password'; However if i did the same GRANT to alter permissions on an existing user without IDENTIFIED clause replication breaks.. Error 'Can't find any matching row in the user table' on query. Default database: 'mysql'. Query: 'GRANT execute,select ON database1.* TO `user`@`host`' If I try and run the query manually i get the same error.. Server 1: mysql> SHOW VARIABLES LIKE "%version%"; +-------------------------+------------------------------------------------------------+ | Variable_name | Value | +-------------------------+------------------------------------------------------------+ | protocol_version | 10 | | version | 5.0.77-log | **my.cnf** [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql old_passwords=1 symbolic-links=0 max_allowed_packet = 100M log-bin = /var/lib/mysql/logs/borg-binlog.log max_binlog_size=50M expire_logs_days=7 [mysql.server] user=mysql basedir=/var/lib [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid Server 2: mysql> SHOW VARIABLES LIKE "%version%"; +-------------------------+------------------------------------------------------------+ | Variable_name | Value | +-------------------------+------------------------------------------------------------+ | protocol_version | 10 | | version | 5.0.77-log | my.cnf server-id=12 master-host=x master-user=x master-password=x master-connect-retry=60 relay-log=/var/lib/mysql/borg-relay.log relay-log-index=/var/lib/mysql/borg-relay-log.index Thanks for taking a look Edit: Currently its running fine, until you do the grant which breaks it... mysql> show slave status\G *************************** 1. 
row *************************** Slave_IO_State: Waiting for master to send event Master_Host: 10.128.0.5 Master_User: repli-ragnarok Master_Port: 3306 Connect_Retry: 60 Master_Log_File: borg-binlog.002730 Read_Master_Log_Pos: 4375760 Relay_Log_File: borg-relay.005489 Relay_Log_Pos: 4375899 Relay_Master_Log_File: borg-binlog.002730 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 4375760 Relay_Log_Space: 4375899 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: 0 1 row in set (0.00 sec) Edit: Broken show slave status from history +----------------------------------+-------------+----------------+-------------+---------------+--------------------+---------------------+-------------------+---------------+-----------------------+------------------+-------------------+-----------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+ | Slave_IO_State | Master_Host | Master_User | Master_Port | Connect_Retry | Master_Log_File | Read_Master_Log_Pos | Relay_Log_File | Relay_Log_Pos | Relay_Master_Log_File | Slave_IO_Running | Slave_SQL_Running | Replicate_Do_DB | Replicate_Ignore_DB | Replicate_Do_Table | Replicate_Ignore_Table | Replicate_Wild_Do_Table | Replicate_Wild_Ignore_Table | Last_Errno | Last_Error | Skip_Counter | Exec_Master_Log_Pos | Relay_Log_Space | Until_Condition | Until_Log_File | Until_Log_Pos | Master_SSL_Allowed | Master_SSL_CA_File | Master_SSL_CA_Path | Master_SSL_Cert | Master_SSL_Cipher | Master_SSL_Key | Seconds_Behind_Master | +----------------------------------+-------------+----------------+-------------+---------------+--------------------+---------------------+-------------------+---------------+-----------------------+------------------+-------------------+-----------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+ | Waiting for master to send event | 10.128.0.5 | repli-valhalla | 3306 | 60 | borg-binlog.002729 | 40429793 | borg-relay.005486 | 40311514 | borg-binlog.002729 | Yes | No | | | | | | | 1133 | Error 'Can't find any matching row in the user table' on query. Default database: 'mysql'. 
Query: 'GRANT execute,select ON auth_tracker.* TO `mail-sin1`@`%.sin1.netline.net.uk` IDENTIFIED BY 'mail-sin1666'' | 0 | 40311375 | 40429932 | None | | 0 | No | | | | | | NULL | +----------------------------------+-------------+----------------+-------------+---------------+--------------------+---------------------+-------------------+---------------+-----------------------+------------------+-------------------+-----------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+ 1 row in set (0.06 sec)
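    A hedged reading of the error above (1133, "Can't find any matching row in the user table"): GRANT ... IDENTIFIED BY creates the account if it does not yet exist, so it succeeds everywhere, while a bare GRANT only modifies an existing mysql.user row and fails on any slave where that user@host row was never created, for example because the account predates replication of the mysql database. A quick way to confirm on the slave that breaks (names are placeholders):

        # does the account row exist at all on this slave?
        mysql -e "SELECT user, host FROM mysql.user WHERE user = 'user1';"
        # if it is missing, create it (or re-run the GRANT with an IDENTIFIED BY clause), then resume
        mysql -e "START SLAVE;"

    Keeping account creation and later permission changes consistent between master and slaves avoids the mismatch, since these statements replicate as statements under 5.0.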

    Read the article

  • re-use '~/.profile` for Fish?

    - by Albert
    (I'm talking about the Fish shell.) For Bash/zsh I had a ~/.profile with some exports, aliases and other stuff. I don't want to maintain a separate environment-variable config for Fish; I want to re-use my ~/.profile. How? The FAQ says I can at least import those settings via /usr/local/share/fish/tools/import_bash_settings.py, but I don't really like running that for every Fish instance.
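    One hedged workaround, relying on nothing Fish-specific, is plain environment inheritance: let bash evaluate ~/.profile once and then replace itself with fish, so every variable the profile exports is inherited by the fish process. For example as the command a terminal emulator or login entry runs:

        # bash reads ~/.profile (-l makes it a login shell), then execs fish in its place
        bash -lc 'source ~/.profile; exec fish'

    Aliases and shell functions do not carry over this way, only exported environment variables do, so anything beyond exports still needs a Fish-side equivalent.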

    Read the article

  • Disable .htaccess from apache allowoverride none, still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/
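    Hard to say for certain without knowing which DocumentRoot serves the affected site, but two things stand out in the configuration above: the global <Directory /> block in conf.d/security is commented out, and mods-available/userdir.conf still grants AllowOverride FileInfo AuthConfig Limit Indexes for /home/*/public_html, which keeps .htaccess processing alive for anything served from there. Since apache2.conf already does Include conf.d/, one way to switch .htaccess lookups off server-wide is a small global snippet; a sketch (more specific per-directory AllowOverride lines, such as the userdir one, still win for their own paths and would need changing too):

        sudo tee /etc/apache2/conf.d/no-htaccess <<'EOF'
        <Directory />
            AllowOverride None
        </Directory>
        EOF
        sudo /etc/init.d/apache2 restart

    Apache only skips reading .htaccess files in directories whose effective AllowOverride is None, so every <Directory> that says otherwise keeps them in play.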

    Read the article

  • PHP & MySQL on Mac OS X: Access denied for GUI user

    - by Eirik Lillebo
    Hey! This question was first posted to Stack Overflow, but as it is perhaps just as much a server issue I thought it might be worth posting here as well. I have just installed and configured Apache, MySQL, PHP and phpMyAdmin on my MacBook in order to have a local development environment. But after I moved one of my projects over to the local server I get a weird MySQL error from one of my calls to mysql_query():

        Access denied for user '_securityagent'@'localhost' (using password: NO)

    First of all, the query I'm sending to MySQL is valid, and I've even tested it through phpMyAdmin with perfect results. Secondly, the error only happens here, while I have at least 4 other MySQL connections and queries per page. This call to mysql_query() happens at the end of a really long function that handles data for newly created or modified articles. This is basically what it does: collect all the data from the article form (title, content, dates, etc.); validate the collected data; connect to the database; dynamically build the SQL query based on the validated article data; send the query to the database before closing the connection. Pretty basic, I know. I did not recognize the username "_securityagent", so after a quick search I came across this from an article at Apple's Developer Connection talking about some random bug: "Mac OS X's security infrastructure gets around this problem by running its GUI code as a special user, '_securityagent'." Then I tried putting a var_dump() on all variables used in the mysql_connect() call, and every time it returns the correct values (where the username is not "_securityagent", of course). So I'm wondering if anyone has any idea why '_securityagent' is trying to connect to my database, and how I can keep this error from occurring when I call mysql_query().

    Update: Here is the exact code I'm using to connect to the database, with a little explanation: the connection error happens at a call to mysql_query() in function X in class_1; class_1 uses class_2 to connect to the database; class_2 reads a config file with the database connection variables (host, user, pass, db); class_2 connects to the database through the following function:

        var $SYSTEM_DB_HOST = "";
        function connect_db() {
            // Reads the config file
            include('system_config.php');
            if (!($SYSTEM_DB_HOST == "")) {
                mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS);
                @mysql_select_db($SYSTEM_DB);
                return true;
            } else {
                return false;
            }
        }
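    A hedged guess at the mechanism, based on how the old mysql extension behaves: if that last mysql_query() runs on a code path where no connection is open (for example the earlier link was closed, or connect_db() returned false because $SYSTEM_DB_HOST was empty), PHP silently tries to open a new connection as if mysql_connect() had been called with no arguments, and the fallback credentials then come from php.ini and the environment rather than from your config, which would explain a username you never supplied. Worth checking what those defaults are:

        # see which mysql.default_* values PHP would fall back to for an implicit connection
        php -i | grep -i 'mysql.default'

    Passing the link identifier returned by mysql_connect() explicitly to mysql_query(), and checking connect_db()'s return value, should at least turn the silent fallback into a visible failure.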

    Read the article

< Previous Page | 251 252 253 254 255 256 257 258 259 260 261 262  | Next Page >