Search Results

Search found 11547 results on 462 pages for 'parameter binding'.


  • crontab environment

    - by Adamski
    I have written various scripts to launch Java server applications, which typically run for 24 hours before being shut down (by invoking the same script with a different parameter). The scripts rely on environment variables defined in a file, ~/<user>.env, which I source from .bashrc. This works fine when invoking the script from the command line, but if I want to add the script as a crontab entry I run into the problem that .bashrc isn't read. My question: what is the best-practice approach for solving this problem? I realise I could define a crontab entry such as:

        * * * * 1-5 /usr/bin/bash -c '. /home/myuser/myuser.env && /home/myuser/scripts/myscript.sh'

    ... but this seems plain ugly. Alternatively I could source myuser.env at the beginning of every script, but this would become a nightmare to maintain. Any help appreciated.
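
    One low-maintenance approach, sketched below: cron lets you set variables at the top of the crontab, and bash sources whatever file BASH_ENV points to in non-interactive shells. This assumes the scripts run under bash and that myuser.env is plain shell syntax; the schedule line here is only illustrative:

        SHELL=/bin/bash
        BASH_ENV=/home/myuser/myuser.env
        # m  h  dom mon dow  command
        0  6  *   *   1-5   /home/myuser/scripts/myscript.sh start

    Every job in the crontab then picks up the environment without each script having to source the file itself.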


  • approx via inetd is not open to connections from other machines

    - by Cédric Girard
    I have an approx server to speed up Debian apt updates on my Ubuntu 11.04 desktop PC. It ran fine in the past, but today port 9999 is open from localhost yet not from other PCs. I have not modified the inetd configuration at all. What can I check and try? inetd.conf:

        9999 stream tcp nowait approx /usr/sbin/approx /usr/sbin/approx

    approx.conf:

        # Here are some examples of remote repository mappings.
        # See http://www.debian.org/mirror/list for mirror sites.
        debian     http://ftp2.fr.debian.org/debian
        security   http://security.debian.org/debian-security
        volatile   http://volatile.debian.org/debian-volatile

        # The following are the default parameter values, so there is
        # no need to uncomment them unless you want a different value.
        # See approx.conf(5) for details.
        $cache /espace/Dossiers/approx
        $max_rate unlimited
        $max_redirects 5
        $user approx
        $group approx
        $syslog daemon
        $pdiffs true
        $offline false
        $max_wait 10
        $verbose false
        $debug false

    I tried to allow other PCs to connect with an "ALL: ALL" entry in hosts.allow. ufw is disabled, and iptables-save output is empty.
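
    A few things worth checking, sketched here on the assumption that openbsd-inetd is the superserver in use (the hostname is illustrative):

        # Is anything actually listening on 9999, and on which address?
        sudo netstat -tlnp | grep 9999

        # Test from another machine on the LAN:
        telnet desktop-pc 9999

        # inetd only rereads its configuration on restart/SIGHUP:
        sudo /etc/init.d/openbsd-inetd restart

        # TCP-wrapper denials and inetd complaints would show up here:
        grep -i approx /etc/hosts.deny /var/log/syslog

    If netstat shows the listener bound to 127.0.0.1 rather than 0.0.0.0, the restriction comes from the superserver's listen address rather than from approx itself.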


  • Linux: How to break a large file into smaller files?

    - by Runcible
    I have a giant file (20 gigs) sitting on my source machine and I need to transfer it to my target machine. For the purposes of this question, let's assume that I do not have network connectivity between the two machines. I need to break this file into a series of smaller files, write the smaller files to DVD(s), then re-assemble everything on the target machine. Both source and destination machines are Linux boxes. Is there a way to accomplish this using tar? I have a feeling that I need to use the --multi-volume parameter. What are my options? I need to be able to specify the size of the volume files, in order to make sure that each one will fit onto a single DVD. Thanks!
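
    tar's --multi-volume mode works, but split(1) is often the simpler tool for this, since it makes fixed-size pieces you can burn one per disc and reassemble with cat. A sketch, assuming single-layer 4.7 GB DVDs (the file names are illustrative):

        # On the source machine: 4480 MiB pieces fit a single-layer DVD
        split -b 4480m huge-file.img huge-file.part_

        # Burn each huge-file.part_* to its own DVD, then on the target:
        cat huge-file.part_* > huge-file.img

        # Verify against a checksum taken on the source machine
        md5sum huge-file.img

    If you do want tar instead, the equivalent is tar -cML with a volume size in 1024-byte blocks, but split avoids tar's interactive volume prompts.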


  • Restore access to Cisco Connect after changing router settings

    - by StasM
    I recently bought a Cisco Valet Plus (M20) wireless router (which I now recognize was a mistake, but never mind). It has two setup options: the Cisco Connect software and web-based setup. The Cisco Connect software allows changing only a very small set of settings; the web-based setup gives access to almost all settings, except those for the guest network. The problem is that after I make some changes through the web-based setup, Cisco Connect refuses to talk to the router, so I can't change the guest settings anymore (since the web interface doesn't expose them). It must be because some config parameter no longer matches or some password is set wrong, but I don't know where Cisco Connect stores them. So, does anybody have any idea how to make Cisco Connect talk to the router again after the settings have been changed through the web interface?


  • Word Macro: Move Cursor Down a Row

    - by Bryan
    I have a macro which I've been using to merge two cells together in a Word table, but I want the cursor to then move down by one cell, so that I can repeatedly press the shortcut key to repeat the command over and over. The macro code that I have (shamelessly copied and pasted from a web page) is as follows:

        Sub MergeWithCellToRight()
        '
        ' MergeWithCellToRight Macro
        '
            Dim oRng As Range
            Dim oCell As Cell
            Set oCell = Selection.Cells(1)
            If oCell.ColumnIndex = Selection.Rows(1).Cells.Count Then
                MsgBox "There is no cell to the right?", vbCritical, "Error"
                Exit Sub
            End If
            Set oRng = oCell.Range
            oRng.MoveEnd wdCell, 1
            oRng.Cells.Merge
            Selection.Collapse wdCollapseStart
        End Sub

    I've attempted to add the following line just before the End Sub statement:

        Selection.MoveDown wdCell, 1

    but this generates the error "Run-time error '4120': Bad Parameter" whenever I execute the macro. Can anyone tell me how to correct this, or what I'm doing wrong?
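
    The likely cause: wdCell is not a valid Unit for Selection.MoveDown (that method accepts units such as wdLine, wdParagraph, wdScreen and wdWindow), hence the "Bad Parameter" error. When the selection sits inside a table, moving down one line lands in the cell below, so a sketch of the fix is:

        ' Replace the failing line with a line-wise move; inside a
        ' table this puts the insertion point in the cell below.
        Selection.MoveDown Unit:=wdLine, Count:=1

    This is not tested against every merge shape: with cells of very different heights a wdLine move can land one column off, in which case selecting Selection.Tables(1).Cell(row + 1, col) explicitly is the more robust alternative.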


  • Can't get PowerShell to return where results from GCI using ACL

    - by Rossaluss
    I'm trying to get PowerShell to list files in a directory that are older than a certain date and match a certain user. I've got the below script so far, which gives me all the files older than a certain date and lists the directory and who owns them:

        $date = get-date
        $age = $date.AddDays(-30)
        ls '\\server\share\folder' -File -Recurse | `
            where {$_.lastwritetime -lt "$age"} | `
            select-object $_.fullname,{(Get-ACL $_.FullName).Owner} | `
            ft -AutoSize

    However, when I try to use an additional where condition to select only files owned by a certain user, I get no results at all, even though I know I should, based on the match I'm trying to obtain (as below):

        $date = get-date
        $age = $date.AddDays(-30)
        ls '\\server\share\folder' -File -Recurse | `
            where ({$_.lastwritetime -lt "$age"} -and {{(get-acl $_.FullName).owner} -eq "domain\user"}) | `
            select-object $_.fullname,{(Get-ACL $_.FullName).Owner} | `
            ft -AutoSize

    Am I missing something? Can I not use the get-acl command in a where condition as I've tried to? Any help would be appreciated. Thanks
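
    The problem is the bracketing rather than Get-Acl itself: where takes a single scriptblock, and the -and has to live inside it. As written, the -and joins two scriptblock literals, and the inner {...} around the Get-Acl call is itself a scriptblock object, which never equals the string "domain\user", so the filter matches nothing. A sketch of the corrected filter:

        $age = (Get-Date).AddDays(-30)
        Get-ChildItem '\\server\share\folder' -File -Recurse |
            Where-Object { ($_.LastWriteTime -lt $age) -and
                           ((Get-Acl $_.FullName).Owner -eq 'domain\user') } |
            Select-Object FullName, @{ n = 'Owner'; e = { (Get-Acl $_.FullName).Owner } } |
            Format-Table -AutoSize

    Comparing against $age directly (rather than the string "$age") also keeps the comparison a DateTime one instead of a string one.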


  • How do I know that AWE is working?

    - by adan851018
    I have a 32-bit server with 8 GB of RAM, on which I installed Windows Server 2003 Enterprise and SQL Server 2005 Enterprise. The operating system recognizes all 8 GB, but SQL Server was only using 1.6 GB of memory, so I added the /PAE switch to boot.ini and enabled AWE in SQL Server with a memory range of 1 GB to 5 GB. After rebooting the server, the system that uses the database became noticeably faster, but in the Windows Task Manager SQL Server now only uses about 100 MB of memory. Is AWE actually working, or what happened?
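
    That is the expected symptom of AWE working: pages allocated through the AWE API are not counted in the working set that Task Manager displays, so the process looks tiny while the buffer pool is actually large. SQL Server's own counters tell the real story; a sketch of one way to check:

        -- 'Total Server Memory (KB)' reflects the buffer pool including
        -- AWE-mapped pages; with AWE active it should be far above what
        -- Task Manager shows for sqlservr.exe.
        SELECT object_name, counter_name, cntr_value / 1024 AS value_mb
        FROM sys.dm_os_performance_counters
        WHERE counter_name LIKE '%Server Memory%';

    The SQL Server error log also notes at startup that Address Windowing Extensions is enabled; the service account needs the "Lock pages in memory" privilege for AWE to engage at all.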


  • Installing a new SQL Server instance fails

    - by Rubio
    In my setup program I previously installed SQL Server Express 2005. Now I've switched to SQL Server Express 2008 and updated the command-line parameters to those documented for the latter. If the computer already has SQL Server Express 2008 installed, my installer should create a new instance. The command-line parameters are as follows:

        /ACTION=Install /FEATURES=SQLEngine /QS /INSTANCENAME=ABCD /SECURITYMODE=SQL /SAPWD=CunningPassword

    The requested instance name does not exist on the target machine, yet setup ends with error -2068643838. The logs show the following error: "No features were installed during the setup execution. The requested features may already be installed." If I remove the /QS parameter and try to install interactively, I get as far as the Feature Selection page. The UI shows three options: Instance Features, Shared Features and Redistributable Features. Whatever I select, clicking Next results in the same error ("There are validation errors on this page"). Any ideas, anyone?


  • Environment variables in Weblogic Managed Server with SSL nodemanager

    - by Eric Darchis
    We have a legacy C application, started through JNI, that requires environment variables. Not java -Djava.library.path or -Dvar=foo, as these are purely Java; I need real environment variables. When we set up our domains, we usually use the SSH method to start the node managers. This works fine and the env variables are set properly. Recently the sysadmin decided, for a few reasons, to use SSL mode for the node managers. The servers start, but the environment variables are not set. I checked with "pargs -e" (this is a Solaris machine) that the env variable was indeed absent for both the node manager and the managed server. Is SSL starting the managed server without running the .sh scripts, or am I missing a parameter somewhere?
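
    A plausible explanation, offered as a guess: the Java-based node manager (which is what the SSL mode uses) launches servers directly from its own JVM unless it is told to go through the domain start scripts, whereas the SSH method runs a shell that picks up your profile. The relevant nodemanager.properties switches would be:

        # Have the node manager launch servers via the start script,
        # so the usual shell environment (and anything the script
        # exports) is in place before the server JVM starts.
        StartScriptEnabled=true
        StartScriptName=startWebLogic.sh

    With that in place, exporting the variables from startWebLogic.sh, or from a script it sources such as setDomainEnv.sh, should make them visible to the JNI code.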


  • Can a *.example.com wildcard domain be served from a single page?

    - by Sean Kean
    For a domain 'example.com': what is the easiest way to set up wildcard DNS (*.example.com), hosting, and the htaccess/httpd.conf/virtualhost configuration, plus a script on a page, so that:

        how.do.i.setup.a.site.with.wildcards.like.this.example.com

    or

        anything.that.is.given.as.a.subdomain.for.example.com

    is rendered by a page at example.com/index.html, yet keeps the wildcard subdomain in the URL bar and passes the full URL as a parameter for rendering tags in the HTML? An example tag is a Facebook comment:

        <div class="fb-comments" data-href="http://how.do.i.setup.a.site.with.wildcards.like.this.example.com" data-num-posts="2" data-width="500">

    I just opened a hosting account with spry.com and have a VPS running Ubuntu 11.04-x86-LAMP. Essentially, what is the most straightforward way of doing this? Thanks so much. (I originally posted this over on Stack Overflow but realized it's more of a Server Fault question.)
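
    A sketch of the usual recipe (the IP and paths are placeholders). First a wildcard DNS record, then one Apache vhost whose ServerAlias catches every subdomain:

        ; zone file: wildcard A record
        *.example.com.   IN   A   203.0.113.10

        # httpd.conf / sites-available: one catch-all vhost
        <VirtualHost *:80>
            ServerName example.com
            ServerAlias *.example.com
            DocumentRoot /var/www/example
        </VirtualHost>

    No rewriting is needed to keep the subdomain in the URL bar, since the same DocumentRoot answers every host. For the Facebook tag, the page can assemble data-href on the client from document.location.hostname, or server-side from the Host header ($_SERVER['HTTP_HOST'] under PHP on a LAMP VPS).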


  • In Nginx, can I handle both a location:url or a content-type: text/html response from memcached?

    - by Sean Foo
    I'm setting up an nginx/apache reverse proxy where nginx handles the static files and apache the dynamic ones. I have a search engine, and depending on the search parameter I either directly forward the user to the page they are looking for or provide a set of search results. I cache these results in memcached as:

        key:   /search.cgi?q=foo
        value: LOCATION:http://www.example.com/foo.html

    and:

        key:   /search.cgi?q=bar
        value: CONTENT-TYPE: text/html
               <html>
               ....
               </html>

    I can pull the "Content-type ..." values out of memcached using nginx and send them to the user, but I can't quite figure out how to handle a returned value like "Location ...". Can I?
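
    For what it's worth, the stock memcached module treats the cached value strictly as a response body: it cannot turn part of the value into a Location (or any other) header, so the redirect case needs either a small dynamic shim or an embedded-scripting module such as OpenResty's Lua. The content-type half, though, is straightforward; a sketch (the backend port is illustrative):

        location /search.cgi {
            set $memcached_key "$uri?$args";   # must match the key the backend stores
            memcached_pass 127.0.0.1:11211;
            default_type text/html;
            # cache miss (404) or memcached trouble: fall through to
            # apache, which also repopulates the cache
            error_page 404 502 504 = @apache;
        }

        location @apache {
            proxy_pass http://127.0.0.1:8080;
        }

    A common workaround is to cache only the body-type results in memcached and let the redirect cases always fall through to apache, which is cheap since a redirect response carries no body.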


  • Windows Server 2008: Limit UDP/TCP packets per IP or ban

    - by WBAR
    How can I limit the UDP/TCP packets per IP sent to my host (or better, per port), per second or minute? It would be nice to ban the offending IP for 12/24 hours, or even forever. I have Windows Server 2008, and I'm very poor at Windows administration but quite good on Linux. EDIT: The basic problem is that they are sending a lot of rubbish UDP and TCP packets: TCP packets without SYN, fragmented UDP packets, and so on, until my servers stop responding. So I need to cut off users (IPs) sending more than X packets per second. I need a solution which provides me, somehow configurably, with: X packets of a certain type (UDP, TCP, or both; let's say a parameter named Z) are allowed to be received on port Y per IP, otherwise the packet should be DROPPED. My virtual hosts are hosted in VirtualBox, and I'm able to forward all incoming packets of a certain type and port to a specific virtual host, but I need to DROP them before VirtualBox receives them.


  • How to prevent log output from a PostgreSQL stored procedure?

    - by ssc
    I am running a number of PostgreSQL scripts that used to produce excessive log output. I managed to reduce most of the output to an acceptable amount by passing --quiet to the psql command-line client and adding

        SET client_min_messages='warning';

    to the beginning of my SQL scripts. This works fine for most basic statements like SELECT, INSERT, UPDATE, etc. However, when I call a stored function in a script using e.g. SELECT my_func(my_args); there is still output similar to:

        my_func
        ---------

        (1 row)

    The output is useless; it only means I have to scroll back up a long way after the script has run, and it also makes it much harder than necessary to spot any relevant error output. How can I get rid of it?
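
    That block is not log output but psql's normal result formatting (column header, separator dashes, row-count footer), which --quiet does not touch. Tuples-only mode removes all of it; a sketch, with illustrative file and database names:

        # -t / --tuples-only drops headers, separator lines and "(n rows)"
        psql --quiet --tuples-only -f myscript.sql mydb

    The same switch can be set from inside the script, which keeps the behaviour with the script rather than with the invocation:

        \pset tuples_only on
        SELECT my_func(my_args);

    If even the function's return value is unwanted, calling it via PERFORM inside a DO block (PostgreSQL 9.0+) silences it entirely.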


  • Best practices for setting lm-factor in Squid refresh patterns

    - by Mpentecost
    I am running a Squid (3.1) cache in front of Django. The content of the site does not change very often, so Squid gives our backend much-needed breathing room. Currently, this is the refresh pattern that we are using to cache the content:

        refresh_pattern . 60 100% 60

    We basically want to cache everything for at least an hour (and only an hour) before Squid re-validates the content. My question is about the "100%" parameter, which sets the lm-factor. I'm not sure that setting it to 100% does what we want. The assumption was that by setting it to 100%, it would ensure that objects stay in the cache for the max cache time. Is this an incorrect assumption? What are the best practices one should follow when setting up a refresh pattern like this?
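
    A sketch of how the three numbers interact, written as comments on the same rule (this is the standard refresh_pattern heuristic; the concrete times are only an example):

        # For an object with a Last-Modified header, Squid computes
        #   fresh_for = lm-factor x (time since Last-Modified)
        # and then clamps the result to [min, max]. Example: an object
        # last modified 10 hours before it was fetched, with lm-factor
        # 100%, would be fresh for 10 hours; max = 60 minutes clamps
        # that down to exactly one hour.
        refresh_pattern . 60 100% 60

    Because min and max are both 60 minutes here, the clamp makes the lm-factor irrelevant: every heuristically-cached object is considered fresh for exactly an hour no matter what. So the config already does what was intended, just not because of the 100%. Note also that explicit Cache-Control/Expires headers from Django override the refresh_pattern heuristic entirely.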


  • FTP "PUT" fails from Virtual Machine, but not host PC: 504 Command not implemented for that paramete

    - by BrianH
    I have an FTP script I'm using to automate a file transfer. The transfer works fine on my PC (XP SP2), but when I try to run it on a VM on my PC (also XP SP2), the "put" command gives off:

        504 Command not implemented for that parameter.

    FTP script file:

        open [ftp site]
        [username]
        [password]
        cd [directory on FTP server]
        binary
        hash
        put ..\[subfolder1]\[Subfolder2]\[subfolder3]\[filename]
        bye

    The FTP site/server is around the world and not under my control. From what I understand of a 504, it means the command should NEVER work, but since the same script DOES work on my PC (hosting the VM), that eliminates syntax, file naming, etc. The put command, when triggered from the VM, actually creates a 0-length file on the target FTP server, but doesn't populate the file.
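
    One way to narrow this down, sketched with the Windows ftp.exe client's own switches (script.txt stands in for the real script file): run the client with debugging on both machines and compare what actually goes over the wire.

        rem -d echoes every command sent to the server, so the exact
        rem command/argument pair that draws the 504 becomes visible
        ftp -d -s:script.txt

    Since the 0-length file shows the STOR itself is accepted, the trace will likely reveal a preparatory or follow-up command that differs between the two machines (the host and VM clients, or their resolved paths, may not be sending identical sequences).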


  • WGet or cURL: Mirror Site from http://site.com And No Internal Access

    - by alharaka
    I have tried wget -m, wget -r, and a whole bunch of variations. I am getting some of the images on http://site.com, one of the scripts, and none of the CSS, even with the fscking -p parameter. The only HTML page is index.html, and there are several more referenced, so I am at a loss. curlmirror.pl on the cURL developers' website does not seem to get the job done either. Is there something I am missing? I have tried different levels of recursion with only this URL, but I get the feeling I am missing something. Long story short, some school allows its students to submit web projects, but they want to know how they can collect everything for the instructor who will grade it, instead of him going to all the externally hosted sites.
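
    A fuller invocation worth trying, sketched here (site.com stands in for the real site; drop -e robots=off unless you are entitled to ignore the site's robots.txt, which is a frequent reason -p silently skips CSS and images):

        wget --mirror --page-requisites --convert-links --adjust-extension \
             --span-hosts --domains=site.com \
             -e robots=off --user-agent="Mozilla/5.0" \
             http://site.com/

    Two other common causes of missing CSS: stylesheets pulled in via @import or JavaScript (wget only follows links it can parse, and only parses CSS for further URLs in newer releases), and assets served from a different hostname, which is what --span-hosts plus a --domains whitelist addresses.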


  • Staggering java linux process startup to prevent OOM

    - by ctennis
    I am running a number of Java processes on a single Linux machine. From a memory and computing standpoint, everything is fine when things are static. However, periodically we use a configuration management package to upgrade the jar or war files and restart the Java processes. The problem is that it restarts them all relatively quickly, so we get 10 or so JVMs restarting at the same time (we use daemontools for the service stops/starts), which wreaks havoc on the machine in terms of OOMs or just being really slow, because it's spawning the JVM 10x simultaneously. Other than trying to stagger the startups, is there a smarter way of handling this? Maybe sysctl performance-tuning parameters, or a JVM parameter?
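
    If staggering does turn out to be the pragmatic answer, it can live in the daemontools ./run script itself rather than in the configuration management tool; a sketch (the jar path and heap flag are placeholders):

        #!/bin/bash
        # daemontools ./run: jitter startup by 0-60 s so that a mass
        # restart fans the JVM spawns out instead of stacking them.
        exec 2>&1
        sleep $(( RANDOM % 60 ))
        exec java -Xmx512m -jar /opt/app/server.jar

    Since supervise re-runs this script on every restart, the jitter applies to upgrades and crashes alike. Capping each JVM's heap (-Xmx) so the sum fits in physical RAM is the other half of keeping the OOM killer away.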


  • Exchange 2010 Room Mailbox Calendar Permissions

    - by Brian Mitchell
    Exchange 2010 SP2, Outlook 2007/2010, Server 2008. I have managed to set up several room mailboxes in Exchange; people are able to book the rooms and they get a response from the Exchange server, which is brilliant. However, users are unable to view the calendar of the room mailbox to see what times are available. Ideally I would like users to only see whether the room is free or not; I don't want users to see the details of the meeting (title, description, etc.). I have been trying to do this using the following command:

        Add-MailboxFolderPermission -Identity meetingroom -User "Usergroup" -AccessRights AvailabilityOnly -DomainController AD-Server

    This throws the following error:

        Specified argument was out of the range of valid values.
        Parameter name: memberRights
            + CategoryInfo : NotSpecified: (meetingroom:MailboxFolderIdParameter) [Add-MailboxFolderPermission], ArgumentOutOfRangeException
            + FullyQualifiedErrorId : CBC6516F,Microsoft.Exchange.Management.StoreTasks.AddMailboxFolderPermission

    Any help with this situation would be brilliant; I have been trying to get this done for a couple of days and I'm going around in circles.
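
    A sketch of what has worked elsewhere: AvailabilityOnly (like LimitedDetails) is a calendar-specific right, so -Identity needs to point at the room's Calendar folder rather than at the mailbox as a whole, and the grantee should be a user or a mail-enabled security group:

        Add-MailboxFolderPermission -Identity "meetingroom:\Calendar" -User "Usergroup" -AccessRights AvailabilityOnly

        # If a permission entry for that user already exists, modify it
        # instead of adding a duplicate:
        Set-MailboxFolderPermission -Identity "meetingroom:\Calendar" -User Default -AccessRights AvailabilityOnly

    On non-English installations the folder name may differ (e.g. "meetingroom:\Kalender"), which is another common source of this same error.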


  • Samba as a PDC and offline authentication

    - by Aimé Barteaux
    Say I have a Windows laptop which has been joined to a domain, and the domain has a Samba server as its PDC. Now say that I take the laptop outside the network (the network is completely inaccessible). Will I be able to log on to accounts I have used before on the laptop (through GINA)? Update: looking at the smb.conf documentation, I noticed the setting winbind offline logon: "This parameter is designed to control whether Winbind should allow to login with the pam_winbind module using Cached Credentials. If enabled, winbindd will store user credentials from successful logins encrypted in a local cache." To me it looks like this solves the issue, but can anyone else confirm it and/or point out whether any additional values need to be set?
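
    Worth noting, as a sketch of the two separate mechanisms involved: winbind offline logon only affects Linux/Unix domain members that authenticate through pam_winbind, so it is not what keeps a Windows laptop working offline. Windows caches the last domain logons itself, independently of the PDC, controlled by a local registry value (the default of 10 cached logons is usually already enough):

        ; HKLM, REG_SZ, default "10"; set to "0" to disable caching
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "CachedLogonsCount"="10"

    So a laptop that has logged on to the domain at least once while the Samba PDC was reachable should keep logging on fine off the network, with no smb.conf changes needed.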


  • What are the best customizable monitoring tools for a cluster / distributed system?

    - by Adil
    I am working on a system with multiple servers. I am interested in monitoring some server-specific data like CPU/memory usage, disk/filesystem usage, network traffic, system load, etc., plus some data specific to my own processes. What open-source tools are available that can serve my purpose, ideally ones that let me customize the parameters being monitored and monitor my own data by creating a plugin/agent? Any suggestions? I have heard of Nagios, Zabbix and Pandora, but I am not sure whether they provide such an interface.


  • Mapping SkyDrive as a network drive in Mac OS

    - by vittore
    As you probably know, if you have a Windows Live account you get 25 GB of free SkyDrive storage. Moreover, many people know that if you go to your SkyDrive in a browser and copy the cid query-parameter value (https://...live.com/...&cid=xxxxxxxx), you can map SkyDrive as a network drive in Windows using this network path:

        \\[cid].docs.live.net\[cid]\

    I know that a network share like \\server\folder can be mapped in Mac OS too, as smb://server/folder. However, that doesn't seem to be the case with SkyDrive: when I try to map it as smb://[cid].docs.live.net/[cid], Finder says it can't connect. Does anyone know how to map it?


  • OpenVPN route missing

    - by dajuric
    I can connect to an OpenVPN server from Windows without any problems, but when I try to connect from Ubuntu 12.04 (start OpenVPN), I receive the following:

        OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options

    SERVER IP: 161.53.X.X; internal network: 10.0.0.0/8. What do I need to do? Client configuration:

        client
        dev tap
        proto udp
        remote 161.53.X.X 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server configuration:

        local 161.53.X.X
        port 1194
        proto udp
        dev tap
        dev-node OpenVPN
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        # DHCP leases addresses to clients
        server-bridge
        # Push routes to the client to allow it
        # to reach other private subnets behind
        # the server. Remember that these
        # private subnets will also need
        # to know to route the OpenVPN client
        # address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        push "route 10.0.0.1 255.255.0.0"
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        verb 6
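
    A sketch of a likely fix (the addresses below are illustrative and must be adapted to the real 10.0.0.0/8 layout): with a bare server-bridge (DHCP-proxy mode) the client learns no VPN gateway, so a pushed route has nothing to use as its gateway, which is exactly what the error says. Either give server-bridge explicit addressing, or push a route-gateway along with the route; and the pushed route should name a network, not the host 10.0.0.1:

        # server side, option 1: explicit bridge addressing
        server-bridge 10.0.0.4 255.0.0.0 10.0.200.1 10.0.200.254

        # server side, option 2: keep DHCP-proxy mode but hand the
        # client a gateway for pushed routes
        push "route-gateway 10.0.0.4"
        push "route 10.0.0.0 255.0.0.0"

    The Windows client may simply be tolerating the gatewayless route while picking its addressing up over the bridge, which would explain why only the Linux client complains.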


  • Tomcat startup.sh doesn't work

    - by OMG Ponies
    I've just installed Tomcat 6.0.20 (per the Jira documentation's recommendation) on Red Hat EL 5 Server, and attempts to use:

        bin]# ./startup.sh

    result in:

        Using CATALINA_BASE:   /opt/software/jira-tomcat-6.0.20
        Using CATALINA_HOME:   /opt/software/jira-tomcat-6.0.20
        Using CATALINA_TMPDIR: /opt/software/jira-tomcat-6.0.20/temp
        Using JRE_HOME:        /etc/alternatives/jre
        Usage: catalina.sh ( commands ... )
        commands:
          debug             Start Catalina in a debugger
          ...
          version           What version of tomcat are you running?

    I've edited the catalina.sh file to add:

        echo $0
        echo $1

    ... and I see "catalina.sh" and "start" echoed when I use ./catalina.sh start. Why does catalina.sh not use the parameter?
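
    For reference, the tail of a stock Tomcat 6 startup.sh does nothing more than exec catalina.sh with start prepended:

        # last line of bin/startup.sh (Tomcat 6)
        exec "$PRGDIR"/"$EXECUTABLE" start "$@"

    Printing the usage text means catalina.sh fell through its command dispatch, i.e. "start" did not match any branch. One guess worth ruling out cheaply: an archive unpacked on Windows leaves CRLF line endings, and the stray carriage returns break the string comparisons. Running dos2unix startup.sh catalina.sh, or re-extracting from the .tar.gz instead of the .zip, would eliminate that possibility.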


  • How to pass parameters to a function?

    - by sbi
    I need to process an SVN working copy in a PowerShell script, but I have trouble passing arguments to functions. Here's what I have:

        function foo($arg1, $arg2)
        {
            echo $arg1
            echo $arg2.FullName
        }

        echo "0: $($args[0])"
        echo "1: $($args[1])"

        $items = get-childitem $args[1]
        $items | foreach-object -process {foo $args[0] $_}

    I want to pass $args[0] as $arg1 to foo, and $args[1] as $arg2. However, it doesn't work; for some reason $arg1 is always empty:

        PS C:\Users\sbi> .\test.ps1 blah .\Dropbox
        0: blah
        1: .\Dropbox
        C:\Users\sbi\Dropbox\Photos
        C:\Users\sbi\Dropbox\Public
        C:\Users\sbi\Dropbox\sbi
        PS C:\Users\sbi>

    Note: the "blah" parameter isn't passed as $arg1. I am absolutely sure this is something hilariously simple (I only just started with PS and still feel very clumsy), but I have banged my head against this for more than an hour now and I can't find anything.
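
    The catch: inside the scriptblock given to foreach-object, $args is that scriptblock's own automatic variable (empty here), not the script's arguments, so foo's first parameter is always $null. Capturing the script argument in a named variable before the pipeline avoids the shadowing; a sketch:

        $prefix = $args[0]              # script-level args, saved before the pipeline
        $items  = Get-ChildItem $args[1]
        $items | ForEach-Object { foo $prefix $_ }

    Declaring a param($Prefix, $Path) block at the top of the script is the tidier long-term form, since it gives the arguments real names throughout.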


  • Nginx proxy to Apache - resolve HTTP ORIGIN

    - by Fratyr
    I have a server setup with nginx serving static content and proxying all PHP/dynamic requests to apache on 127.0.0.1. I'm building an API for my databases, and I need to allow clients by their origin (domain name) rather than just by IP, based on CORS rules. So when I send an HTTP header

        header("Access-Control-Allow-Origin: www.client-requesting.myapi.com");

    from my API server, I have to tell it which origin I allow; otherwise client-side requests to my API won't work, due to the same-origin policy. The question is: how can I know which domain name (if any) called my API? What should the nginx and apache configuration be to pass the origin parameter? I tried to google, and all I found is a possible solution with mod_rpaf, but I wanted to be sure. Thanks!
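
    The requesting origin arrives in the Origin request header that browsers attach to cross-site XHR, and nginx forwards client request headers to a proxied backend by default, so PHP can normally read it with no extra configuration. A sketch of an explicit proxy block (the backend port is illustrative), pinning the headers that are commonly rewritten:

        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host       $host;
            proxy_set_header X-Real-IP  $remote_addr;
            # Origin passes through by default; set it explicitly only
            # if something in the chain strips or rewrites it:
            proxy_set_header Origin     $http_origin;
        }

    On the apache/PHP side the value is then $_SERVER['HTTP_ORIGIN'], which the API can check against its whitelist before echoing it back in Access-Control-Allow-Origin. mod_rpaf is only needed for recovering the client IP, not the Origin header.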

