Search Results

Search found 43911 results on 1757 pages for 'app directory'.

Page 474/1757 | < Previous Page | 470 471 472 473 474 475 476 477 478 479 480 481  | Next Page >

  • Error installing php extension OAuth via pecl

    - by PJ
    I'm trying to install the PHP extension OAuth in my local environment. php.net suggests it's super easy: you just run pecl install oauth. I tried this, and here is the output in the terminal:

        downloading oauth-1.0.0.tgz ...
        Starting to download oauth-1.0.0.tgz (42,834 bytes)
        ............done: 42,834 bytes
        6 source files, building
        running: phpize
        grep: /usr/include/php/main/php.h: No such file or directory
        grep: /usr/include/php/Zend/zend_modules.h: No such file or directory
        grep: /usr/include/php/Zend/zend_extensions.h: No such file or directory
        Configuring for:
        PHP Api Version:
        Zend Module Api No:
        Zend Extension Api No:
        Cannot find autoconf. Please check your autoconf installation and the $PHP_AUTOCONF environment variable. Then, rerun this script.
        ERROR: `phpize' failed

    Any tips on how to fix the errors and install OAuth successfully? I'm on Mac OS X 10.6.3. Thanks!

    Read the article

  • SCCM Client Push FAIL - Win2000 box

    - by ajp
    Hello, when trying to install the SCCM client onto a Windows 2000 box, the install fails. The install script is run through a batch file (contents: \mdop\SCCM_client\ccmsetup.exe /mp:MDOP /logon smssitecode=MID smsslp=MDOP) hosted on a public area of the network. This script has worked for all machines (mostly Win2003 Server). I've tried enabling all the common services it requires (BITS, IIS Admin, Windows Installer), but it still only runs for a second or two and then quits. Here's the piece of the log file where it errors out:

        [LOG[Couldn't get directory list for directory 'http://MDOP/CCM_Client/ClientPatch'. This directory may not exist.]LOG]! time="13:55:53.618+300" date="06-30-2009" component="ccmsetup" context="" type="0" thread="1676" file="ccmsetup.cpp:6054"

    Full log: http://paste-it.net/public/gb11732/

    Read the article

  • SSH - using keys works, but not in a script

    - by Garfonzo
    I'm kind of confused: I have set up public keys between two servers and it works great, sort of. It only works if I ssh manually from a terminal. When I put the ssh command into a Python script, it asks me for a password to log in. The script uses rsync to sync up a directory from one server to the other.

    The manual ssh command that works (no password prompt, automatic login):

        ssh -p 1234 [email protected]

    In the Python script:

        rsync --ignore-existing --delete --stats --progress -rp -e "ssh -p 1234" [email protected]:/directory/ /other/directory/

    What gives? (Obviously, the ssh details are fake.)
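    A common culprit is that the script runs under a different user or environment (cron, sudo, another HOME) than the interactive shell, so ssh never finds the key that the manual login uses. A rough Python sketch that makes the key explicit; the key path here is a placeholder, and BatchMode simply makes ssh fail instead of prompting:

        import subprocess

        KEY = "/home/garfonzo/.ssh/id_rsa"   # hypothetical path to the key the manual login uses
        SRC = "[email protected]:/directory/"
        DST = "/other/directory/"

        # -i pins ssh to that key; BatchMode=yes turns any password prompt into an error.
        ssh_cmd = "ssh -p 1234 -i {} -o BatchMode=yes".format(KEY)

        subprocess.check_call([
            "rsync", "--ignore-existing", "--delete", "--stats", "--progress",
            "-rp", "-e", ssh_cmd, SRC, DST,
        ])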

    Read the article

  • ODEE Green Field (Windows) Part 5 - Deployment and Validation

    - by AndyL-Oracle
    And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far:
    - In part 1, I went over the basic process that I intended to show with installing an ODEE on a green field server, and walked you through the basic installation of the Oracle 11g database.
    - In part 2, I covered the installation of the WebLogic application server.
    - In part 3, I showed you how to install SOA Suite for WebLogic.
    - In part 4, we did the first part of the installation of ODEE itself.
    What remains after all of that is the deployment of the ODEE components onto the database and application server, so let's get to it!

    DATABASE
    First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to log in to the database server in your green field environment.
    1. Open a command line and cd into ODEE_HOME\documaker\database\oracle11g.
    2. Run SQL*Plus as SYSDBA and execute dmkr_admin.sql:
           sqlplus / as sysdba @dmkr_admin.sql
    3. Execute dmkr_asline.sql and dmkr_admin_correspondence_example.sql. If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish).

    APPLICATION SERVER
    Next, we'll deploy the WebLogic domain and its components: Documaker web services, Documaker Interactive, Documaker dashboard, and more. To deploy the components, you'll need to log in to the application server in your green field environment.
    1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts.
    2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set the location of MIDDLEWARE_HOME and ODEE_HOME. If you have used the defaults you'll probably need to change the E: to C: and that's it. Save the changes.
    3. Continuing in the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. If you have used the defaults you'll probably need to just change E: to C: and that's it. Save the changes.
    4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors, correct them, and rerun the script.
    5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it; again this should run to completion.
    6. Next, we'll start the AdminServer, the main WebLogic domain server. To start it, use Windows Explorer and navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output that indicates that the server status changed to RUNNING you may proceed.
       a. Note: if you saw database connection errors, you probably didn't make sure your database name and connection type match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and log in with the weblogic credential you provided in the ODEE installation process.
       b. Once you're logged in, open Services -> Data Sources. Select dmkr_admin and click Connection Pool.
       c. The end of the URL should match the connection type you chose. If you chose ServiceName, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<serviceName>, and if you chose SID, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<SIDname>.
       d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com" (it does not need to actually resolve in DNS). An example SID is just a name, e.g. IDMAKER.
       e. Save the change and repeat for the data source dmkr_asline.
       f. You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file: open the file in a text editor, locate the URL lines, make the appropriate change, then save the file.
    7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd.
    8. In the same directory, execute create_users_groups_correspondence_example.cmd.
    9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server. If you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output like this (screenshot omitted).
    10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials to start the server. You can prevent this by providing the credentials in the startManagedWebLogic.cmd file for the WLS_USER and WLS_PASS values. Note that the credentials will be stored in cleartext. To start each server, type in the command shown.
       a. Start the JMS server: ./startManagedWebLogic.cmd jms_server
       b. Start Dashboard/Documaker Administrator: ./startManagedWebLogic.cmd dmkr_server
       c. Start Documaker Interactive for Correspondence: ./startManagedWebLogic.cmd idm_server

    SOA COMPOSITES
    If you're planning on testing out the approval process components of BPEL that can be used with Documaker Interactive, then use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section.
    1. Stop the servers listed in the previous section (step 10) in the reverse order that they were started.
    2. Run the domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd.
    3. Select Extend and click Next.
    4. Select the iDocumaker Domain and click Next.
    5. Select Oracle SOA Suite - 11.1.1.0 (this may automatically select other components, which is OK). Click Next.
    6. View the Configure JDBC Resources screen. You should not make any changes. Click Next.
    7. Check both connections and click Test Connections. After a successful test, click Next. If the tests fail, something is broken; go back to Configure JDBC Resources and check your service name/SID.
    8. Check all schemas. Set a password (it will be the same for all schemas). Enter the database information (service name, host name, port). Click Next.
    9. Connections should test successfully. If not, go back and fix any errors. Click Next.
    10. Click Next to pass through Optional Configuration.
    11. Click Extend.
    12. Click Done.
    13. Open a terminal window, then navigate to and execute ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd.
    14. Start the WebLogic servers: AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see step 10 in the previous section. Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS you will need to make those changes again.
    15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server1
    16. Open a browser to http://localhost:7001/console and log in.
    17. Navigate to Services -> Data Sources and select DMKR_ASLINE.
    18. Click the Targets tab. Check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source.
    19. Open a command prompt and navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, then execute deploy_soa.cmd.
    That's it! (As if that wasn't enough?)

    DOCUMAKER
    Deploy the sample MRL resources by navigating to and executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database.
    Start the Factory Services: Start -> Run -> services.msc. Locate the service named "ODDF xxxx", right-click it and select Start. Note that each Assembly Line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed (because you could have multiple machines servicing a single assembly line; this allows for easy scripting to control all the services if you choose to do so). Repeat for the Docupresentment service. Note that each Assembly Line has a separate Docupresentment.
    Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input, select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear!
    Open a browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that the job processed successfully. Note that some transactions may fail if you do not have a properly configured email server, and this is OK. You can set up a simple SMTP server (just search the internet for "SMTP developer" and you'll get several to choose from).
    So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities:
    - clustering WebLogic services
    - configuring WebLogic for redundancy
    - configuring Oracle 11g for RAC
    - adding additional Factory servers for redundancy/processing capacity
    - setting up a real MRL (instead of the sample resources)
    - testing Documaker Web Services for job submission
    - and more!
    I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker - there is plenty of activity there and you can ask questions. For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy

    Read the article

  • Sync custom AD properties to SharePoint Profile

    - by KunaalKapoor
    Here are some step-by-step instructions for configuring SharePoint to sync with custom AD attributes:
    1. Add the custom attribute in Active Directory. This part will have to be your doing; here is some documentation regarding creating custom attributes in AD:
       - http://msdn.microsoft.com/en-us/library/ms675085(VS.85).aspx
       - http://technet.microsoft.com/en-us/magazine/2008.05.schema.aspx
       - http://blogs.technet.com/b/isingh/archive/2007/02/18/adding-custom-attributes-in-active-directory.aspx
    2. Open up miisclient.exe (C:\Program Files\Microsoft Office Servers\14.0\Synchronization Service\UIShell\miisclient.exe).
       a. This will have to be opened with the farm admin account.
    3. Click on "Management Agents" in the ribbon.
    4. Right-click the Active Directory Management Agent ("MOSS-<name of sync connection>") and click "Refresh Schema".
       a. When prompted, enter the credentials for the farm account.
    5. Once complete, close out of miisclient.exe.
    6. Go into Central Admin --> Application Management --> Manage Service Applications --> go into the User Profile Service Application.
    7. Click on "Manage User Properties".
    8. Click on "New Property".
    9. Put in the correct information regarding the attribute that was created.
    10. At the bottom of this page, under the "Source Data Connection" drop-down, select the AD synchronization connection you have already configured.
    11. For the "Attribute" drop-down, select the new attribute you have created.
    12. For the "Direction" drop-down, select "Import".
    13. Click "OK".
    14. Run a full synchronization for the User Profile Service Application and the custom property will get synced (as long as the attribute is set in Active Directory for the desired users).

    Read the article

  • Installing a VADTools design component into your 3CX Voice Application Designer toolbox

    - by ParadigmShift
    The 3CX Voice Application Designer is an innovative tool for creating IVR (Interactive Voice Response) applications, or voice applications. It offers a familiar drag-and-drop experience that Visual Studio developers will get the hang of pretty quickly. Additionally, there are new 3rd-party components released by BlueVoice that are distributed through www.UtahVoIPStore.com. I thought I'd post a quick introduction by showing how to install a component into your designer toolbox. In this example I am using the CommandLine component, which lets you call the command line from your voice application.

    First, copy the ZIP file that came with your component to the root folder of your VAD project. Now extract the ZIP file into the root directory. The component will be in the root directory and the Libraries directory will have a new DLL file. Open your VAD project and right-click on the project in the project explorer to add the new component to your project. Navigate to the root folder of your project and select the new component. The component is now ready for you to use in your toolbox.

    Read the article

  • Getit serves up local search in India with Java ME tech

    - by hinkmond
    Did you ever wonder where to get a good lamb vindaloo while you are visiting in Mumbai? Well, you need to get Getit then. See: Getit gets it on Java ME Here's a quote: Getit, the company which provides local search facility and free classifieds services in India, has announced the official release of the Getit Local Search Mobile app for Indian users. The app can be downloaded from the Mobango app store, ... [and]... is available for all platforms like [blah-blah-blah], [yadda-yadda-yadda], Java, Blackberry, Symbian etc... Getit gets it because they ported to the Java ME platform, the most ubiquitous mobile platform out there, and because they know when you want to find a good vindaloo, you want to find a good vindaloo! Hinkmond

    Read the article

  • How do I create /Groups/ folder in Mac OS X

    - by fettereddingoskidney
    I am familiar with adding groups with the GUI in Mac OS X, but I am trying to do it via SSH on a computer I remotely manage as a production server. I want to create/modify some of my users for a particular directory by creating a new group. In another helpful Server Fault post, I see that I need to add the users to the group name at /Groups/foo; however, my system's Groups folder does not exist... Does Mac OS X create the Groups directory only when you actually create a group, if no groups already exist on the machine? Is this something that I can do simply using:

        mkdir "Groups"

    Or maybe I'm wrong altogether. Any pointers for how to go about this with Unix? I should note also that this group will be used to manage access to a directory on my server via an .htaccess file. Thanks!
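    For what it's worth, on Mac OS X local groups normally live in Directory Services rather than in a folder on disk, which would explain why no /Groups directory exists; the /Groups/foo path in that other post is a Directory Service record path that the dscl tool understands, so mkdir won't help. A rough sketch of creating such a group from a script (the group name, GID and member names are placeholders, and the commands need root):

        import subprocess

        GROUP = "webdevs"            # hypothetical group name
        GID = "600"                  # pick an unused numeric group ID
        MEMBERS = ["alice", "bob"]   # hypothetical users

        # "/Groups/..." below is a Directory Service record path, not a filesystem path.
        subprocess.check_call(["dscl", ".", "-create", "/Groups/" + GROUP])
        subprocess.check_call(["dscl", ".", "-create", "/Groups/" + GROUP, "PrimaryGroupID", GID])
        for user in MEMBERS:
            subprocess.check_call(["dscl", ".", "-append", "/Groups/" + GROUP, "GroupMembership", user])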

    Read the article

  • Remove CGI from IIS7

    - by jekcom
    I ran a security scan and it reported that all kinds of CGI stuff are a potential threat. This is part of the result:

        (ash) is present in the cgi-bin directory
        (bash) is present in the cgi-bin directory
        By exploiting this vulnerability, a malicious user may be able to execute arbitrary commands on a remote system. In some cases, the hacker may be able to gain root level access to the system, in which case the hacker might be able to cause copious damage to the system, or use the system as a jumping off point to target other systems on the network for intrusion and/or denial of service attacks.

    and many more related to the cgi-bin directory. First, I searched the whole server for a cgi-bin folder and did not find any. Second, I'm running my website on pure .NET and I don't use any scripts like PHP. The question is: how can I remove this CGI handling from IIS?

    Read the article

  • How can I remove UNC password from a file

    - by freddoo
    Hi, we have an MP4 file on our web server in a virtual directory. When we try to access the file we get prompted for a username/password. When I tried to change the path of the virtual directory I got the message 'The following child nodes also define the value of the "UNCPassword" property, which overrides the value you have just set ...', which included the MP4 file that we are trying to access. How can I remove the UNC password securing the file? The file is not on a shared drive; it's on the same drive as the web site root. The funny thing is that the path of the virtual directory is not a UNC path, it's a full path on the same server (d:.....).

    Read the article

  • Ubuntu One: devices are missing, but still syncing

    - by Hardkorova
    I use Ubuntu One on Mac OS and Ubuntu. In the list of devices on login.ubuntu.com/+applications or one.ubuntu.com/account I see only the Web login. In the Ubuntu One GUI app on Mac and Ubuntu I see "Local device" (without a name, or anything) as the current device, and the Web login in the list of other devices. But both of my computers are still syncing, even after I changed my password! And I can't delete devices from the app, because it generates the error "AttributeError "'QGroupBox' object has no attribute 'startswith'"". You can see a screenshot: http://i40.tinypic.com/21c8tx3.png I think I need to delete all login info on both machines to re-login to the cloud, but cleaning up folders like "ubuntuone" and "sso" on Ubuntu in /home/user/.cache and .config, and on Mac OS in "Libraries", is not working - the app is still logged in. Because of this, synchronization sometimes does not work properly and I have to recheck the sync folders to get changes on them synced.

    Read the article

  • Couldn't find package - But package is listed in the Packages file

    - by Chris
    (Quoted items are redacted elements.) I am using a private repository and am currently trying to repackage some 3rd-party packages. I extract the package, make a few modifications (just the control files, to fit with company policy - though sometimes file install locations too, though not in this case) and repackage (and usually rename). Normally I copy the files into a new blank debhelper project and reconstruct the package. However, with a recent one I am attempting to convert, some libraries and stuff aren't linking properly (I did copy the postinst, postrm, and preinst files along with all DEBIAN files exactly); the original package worked, but my repackage doesn't, despite providing the same files in the same locations and the same postinst and preinst. So I was attempting to just modify the current package's control files (as the original package is not very good and will not list in our repository, and getting a better one from the 3rd party is not an option). I also renamed the package. I did the following:

        dpkg-deb -R "directory"
        (modify DEBIAN/control)
        dpkg-deb -b "directory" "package name I want"

    I did this and put it in our repository. The package shows up in the "Packages" file on the repository, and running apt-get update on the client side shows the package in:

        /var/lib/apt/lists/"server"_"location"_Packages

    However, when I do an apt-get install on the package name (as listed in the Packages file - I did a copy-paste) it says it can't find the package. Same with an apt-cache search. The Packages listing is as follows (names redacted):

        Package: "package name"
        Priority: extra
        Section: unknown
        Maintainer: "maintainer"
        Architecture: any
        Version: 1.0-lucid5
        Depends: libc
        Filename: "directory"/"package_filename"
        Size: 2206292
        MD5sum: "md5sum"
        SHA1: "sha key"
        SHA256: "sha256 key"
        Description: "description"

    I am running as sudo (and tried as root as well). I don't understand why apt-get won't see the package. Can you point out any flaws in what I have done, or perhaps give some help on getting apt-get to properly see the package? Or perhaps an alternative - I am not even sure if this is a valid way to repackage something. Thanks.

    Read the article

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know, a weblogic domain consists of a set of servers running independently or in a cluster mode, sharing the distributed resources. In most environments a weblogic cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability. These servers can run on the same machine, or be located on different machines. It's a common task to increase a cluster's capacity by adding new machines to the cluster to host the new server instances. You can do it by manually installing the weblogic binaries on the new host and using the pack/unpack commands to add a managed server to this new host. But with Enterprise Manager Grid Control 11gR1 (EMGC) there is another way: the Fusion Middleware Domain Scale Up procedure. I'm going to show you how it works.

    Here is a picture of my medrec_oradb weblogic domain, which is registered in EMGC. It contains an admin server and a cluster MedRecCluster with a single managed server MS1. Both the admin and managed servers are on the same host oel46-vmware, a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure. And here are the application deployments; note that a couple of applications are deployed to the cluster.

    First of all I have to prepare a new machine that will host a new managed server of my cluster. I created a new VM with OEL 5.4 using the corresponding Oracle VM template available on the Oracle E-Delivery site for Oracle Linux and Oracle VM, and named it wls1032. The next step is to install the Oracle EM Grid Control 11gR1 Agent on this new host. You can download it from the OTN page and install it manually, or you can use the Agent Installation Deployment procedure available in EMGC (Deployments -> Agent Installation -> Install Agent). Either way, when your agent is up and running on the new machine, you will see it in the EMGC Console in the Targets -> Hosts subtab.

    Now we are ready to scale up our weblogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select the Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process. Select the domain from the list, enter the working directory on the admin server host, and fill in the weblogic credentials for the administration server console and the OS credentials for the admin server host. Click the Next button. The next step allows you to configure your domain: to add a new managed server to the cluster, select the cluster in the tree and click the Add Server button. Select the newly added server in the tree, choose the target host, and enter the configuration details of your managed server. You can also add new machine and node manager details. Please note that you cannot change the values in the Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as for the admin server host. The working directory on the target host should have enough free space to store the FMW home binaries and domain configuration files; in my experience the working directories should have at least 3 GB of free space. The last thing you should fill in is the OS credentials for the target host. The next step allows you to schedule the execution of the procedure; it is started immediately in my example. The last step is just a review of the configuration for the domain scale up. Click Submit to launch the process.

    You can track the status of the procedure execution by selecting Deployments -> Deployment Procedures -> Procedure Completion Status in the EMGC Console. As you can see in the picture below, the procedure consists of many steps, and I'm going to share my experience with the issues that I had at some of the steps. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking the Retry button.

    - The Check OUI Prerequisites step may fail if the target host does not pass the prerequisite checks for a Weblogic Server installation, such as the amount of RAM, Linux packages installed, etc.
    - The Create FMW Clone Archive step may fail if you do not have enough free space in the working directory on the administration server host.
    - The Transfer cloning archive to targets step may fail if the EMGC agents on the admin server host or on the target host are not secured. You should secure the agent by issuing the ./emctl secure agent command from the $AGENT_HOME/bin directory and entering the agent registration password.
    - Both the Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host.

    The most complicated issue I had was on the Run Inventory Collection step. The step failed and I noticed that the agent on the target server had also failed, with the following error in the $AGENT_HOME/sysman/log/emagent.trc log file:

        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.
        Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/upload
        FATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 3
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminating

    I checked the timezone of my domain target inside the EMGC repository:

        select timezone_region
          from mgmt_targets
         where target_type = 'weblogic_domain'
           and display_name = 'medrec_oradb'

        "TIMEZONE_REGION"
        "America/Los_Angeles"

    Then I checked the timezone of my agents and indeed, they differed:

        select target_name, timezone_region
          from mgmt_targets
         where type_display_name = 'Agent'

        "TARGET_NAME"                "TIMEZONE_REGION"
        "oel46-vmware:3872"          "America/Los_Angeles"
        "wls1032.imc.fors.ru:3872"   "America/New_York"

    So I had to change the timezone on the wls1032 host and propagate this change to the agent and to the EMGC repository. Here were the steps:
    1. Issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles".
    2. Propagated the change to the agent by executing the ./emctl resetTZ agent command from the $AGENT_HOME/bin directory.
    3. Connected to the EMGC repository as sysman and executed the following PL/SQL block:

           begin
              mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872','America/Los_Angeles');
              commit;
           end;

    After that I had to clear the pending uploads on wls1032.imc.fors.ru:

        rm -r $AGENT_HOME/sysman/emd/state/*
        rm -r $AGENT_HOME/sysman/emd/collection/*
        rm -r $AGENT_HOME/sysman/emd/upload/*
        rm $AGENT_HOME/sysman/emd/lastupld.xml
        rm $AGENT_HOME/sysman/emd/agntstmp.txt
        $AGENT_HOME/bin/emctl start agent
        $AGENT_HOME/bin/emctl clearstate agent

    The last part of the solution was to resync the agent in the EMGC Console by clicking the Agent Resynchronization button (leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked on the next screen). After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I caught another one:

        EMD upload error: Failed to upload file A0000004.xml: HTTP error.
        Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880]

    So the XML file being uploaded was 7 MB and the limit on the OMS was 5 MB. To increase the maximum file size limit to 20 MB I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:

        ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
        ./emctl stop oms
        ./emctl start oms

    After that I issued the ./emctl upload command from $AGENT_HOME/bin on wls1032 one more time and it completed successfully. The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my weblogic domain scale-up in the EMGC Console. So now the weblogic cluster contains two managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is part of the WebLogic Server Management Pack Enterprise Edition.
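    For routine checking, the same repository query can be scripted. Below is a rough Python sketch using the cx_Oracle driver that flags agents whose timezone regions disagree; the driver, the sysman credentials, and the connection string are assumptions of the sketch, not part of the original post:

        import cx_Oracle  # assumes the Oracle client libraries and cx_Oracle are installed

        # Placeholder connection details -- point these at your EMGC repository database.
        conn = cx_Oracle.connect("sysman", "password", "emgc-repo-host:1521/emrep")
        cur = conn.cursor()

        # Same query as above: list every agent and its reported timezone region.
        cur.execute("""
            select target_name, timezone_region
              from mgmt_targets
             where type_display_name = 'Agent'
        """)
        rows = cur.fetchall()

        zones = {tz for _, tz in rows}
        for name, tz in rows:
            print("{:<30} {}".format(name, tz))
        if len(zones) > 1:
            print("WARNING: agents report different timezone regions:", ", ".join(sorted(zones)))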

    Read the article

  • Getting apache to use ldap group and filesystem group information

    - by Angelo
    We have an Apache server which serves out of a particular directory and just supplies a listing of files. Within this directory, each subdirectory is owned by a certain group of users (at the filesystem level). User groups are determined by a posixGroup in LDAP. Is there any simple way I can tell Apache to authorize access based on filesystem permissions, just as if the users were accessing the filesystem from a shell? I would like to be able to simply add users/groups/directories without having to add another Directory or Location directive to Apache's conf.

    Read the article

  • OpenSSH 5.9p1 on Ubuntu 11.10

    - by Michal Burak
    I want to build a deb package with the latest version of OpenSSH from source, then install it on my machine. I am running:

        Linux Ubuntu-1110-oneiric-64-minimal 3.0.0-12-server #20-Ubuntu SMP Fri Oct 7 16:36:30 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

    To achieve my goal I do:

        wget ftp://ftp.openbsd.com/pub/OpenBSD/OpenSSH/portable/openssh-5.9p1.tar.gz
        wget ftp://ftp.openbsd.com/pub/OpenBSD/OpenSSH/portable/openssh-5.9p1.tar.gz.asc
        gpg openssh-5.9p1.tar.gz.asc
        apt-get build-dep openssh-server openssh-client
        apt-get source openssh-server
        cd openssh-5.8p1/
        uupdate -v 5.9p1 /root/packages/openssh/openssh-5.9p1.tar.gz
        cd ../openssh-5.9p1
        dpkg-buildpackage -us -uc -nc

    But I get an error:

        make[1]: Entering directory `/root/packages/openssh/openssh-5.9p1'
        rm -f debian/tmp/etc/ssh/sshd_config
        dh_install -Nopenssh-client-udeb -Nopenssh-server-udeb --fail-missing
        cp: cannot stat `debian/tmp/usr/bin/ssh-vulnkey': No such file or directory
        dh_install: cp -a debian/tmp/usr/bin/ssh-vulnkey debian/openssh-client//usr/bin/ returned exit code 1
        make[1]: *** [override_dh_install] Error 2
        make[1]: Leaving directory `/root/packages/openssh/openssh-5.9p1'
        make: *** [binary] Error 2
        dpkg-buildpackage: error: debian/rules binary gave error exit status 2

    Any ideas what I should do to make this work?

    Read the article

  • Setting up VSFTPD on AWS EC2 Instance

    - by Robert Ling III
    I'm trying to set up VSFTPD passive hosting on my EC2 instance. I ran through these instructions: http://www.synergycode.com/knowledgebase/blog/item/ftp-server-on-amazon-ec2. However, when I tried to connect in FileZilla, I got:

        Command:  CWD /home/lingiii/ftp
        Response: 250 Directory successfully changed.
        Command:  TYPE I
        Response: 200 Switching to Binary mode
        Command:  PASV
        Response: 227 Entering Passive Mode (10,222,206,33,54,184).
        Status:   Server sent passive reply with unroutable address. Using server address instead.
        Command:  LIST
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    The directory /home/lingiii/ftp is set to rwx permissions for user lingiii, group developers (of which lingiii is a member), AND I'm logging in as user lingiii. Any advice?
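    The 227 reply above shows the server handing out its private 10.x address for the data connection, which a client outside EC2 cannot reach; vsftpd's pasv_address and pasv_min_port/pasv_max_port options (plus matching security group rules for that port range) are the usual fix. A rough ftplib sketch for checking what the server advertises before and after the change; the host and credentials are placeholders:

        from ftplib import FTP

        ftp = FTP("ec2-xx-xx-xx-xx.compute-1.amazonaws.com", timeout=30)  # placeholder host
        ftp.login("lingiii", "password")                                  # placeholder credentials

        # Ask where the server wants the data connection to go. With the default
        # config this echoes the instance's private 10.x address; once pasv_address
        # points at the public/elastic IP, the reply should change accordingly.
        print(ftp.sendcmd("PASV"))

        ftp.set_pasv(True)
        print(ftp.nlst("/home/lingiii/ftp"))
        ftp.quit()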

    Read the article

  • Using LDAP to store customer data

    - by mechcow
    We wish to store some data in 389 Directory Server LDAP that doesn't fit that well into the standard set of schemas that come with the product. Nothing too amazing, things like:

    - when the customer joined
    - whether they are currently active
    - customer certificate [1]
    - which environment they are using

    My question is this: should we register with OID and start writing up our own custom schema, OR is there a standard schema definition not provided by Directory Server that we can download and use that would fit our needs? Should we munge/hack existing attributes and store the data in them (I'm strongly opposed to this, but would be interested in arguments about why it's better than extending)?

    [1] I know there is a field for this (userCertificate), but we don't want to use it to authenticate the user for the purposes of binding.

    Using CentOS 5.5 with 389 Directory Server 8.1.
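    If you do extend the schema (registering an OID and defining your own attribute types is the usual route), reading and writing the new attributes from code is straightforward. A rough python-ldap sketch, in which the attribute names (custJoinDate, custActive, custEnvironment), the bind DN, and the entry DN are all hypothetical and assume the schema extension is already in place:

        import ldap

        # Placeholder connection details and DNs.
        conn = ldap.initialize("ldap://ldap.example.com")
        conn.simple_bind_s("cn=Directory Manager", "password")

        entry_dn = "uid=acme,ou=customers,dc=example,dc=com"

        # Write the custom attributes (they must exist in the server's schema first).
        conn.modify_s(entry_dn, [
            (ldap.MOD_REPLACE, "custJoinDate", [b"20240115"]),
            (ldap.MOD_REPLACE, "custActive", [b"TRUE"]),
            (ldap.MOD_REPLACE, "custEnvironment", [b"staging"]),
        ])

        # Read them back.
        for dn, attrs in conn.search_s("ou=customers,dc=example,dc=com",
                                       ldap.SCOPE_SUBTREE,
                                       "(custActive=TRUE)",
                                       ["custJoinDate", "custEnvironment"]):
            print(dn, attrs)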

    Read the article

  • How to restore qmail backup files

    - by Maysam
    We are using qmail as our mail application on a Linux server. A few weeks ago our server crashed and we had everything installed from scratch, and our users started to send and receive email again. The problem is that they have lost their old emails. We have a backup of the whole qmail directory, but I don't know how to restore the old emails without losing the new ones. It's worth mentioning that I don't have any problem restoring old sent mail: when I copy email files into the .sent-mail/cur directory, they show up restored in the users' Sent box. But restoring files into the cur directory doesn't work for inbox emails, and I can't get them restored.
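    Since qmail mailboxes are normally Maildirs, one low-risk approach is to copy the old message files from the backup into the live Maildir's cur/ directory, skipping any filename that already exists so new mail is never overwritten. A rough sketch for a single user; the backup and Maildir paths are placeholders, and file ownership still has to match the mail user afterwards:

        import os
        import shutil

        BACKUP_CUR = "/backup/qmail/alice/Maildir/cur"   # old messages (placeholder path)
        LIVE_CUR   = "/home/alice/Maildir/cur"           # live inbox (placeholder path)

        # Maildir filenames are unique per message, so copying only names that
        # don't already exist merges the old mail without touching the new mail.
        for name in os.listdir(BACKUP_CUR):
            src = os.path.join(BACKUP_CUR, name)
            dst = os.path.join(LIVE_CUR, name)
            if os.path.isfile(src) and not os.path.exists(dst):
                shutil.copy2(src, dst)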

    Read the article

  • Outlook folder structure Template

    - by Filip Ekberg
    Having a lot of different customers and a lot of different areas to work with makes it essential to keep your mail folders in order. Every time I get a new project or customer I want to add a certain folder structure in my "Customer" / "Project" subdirectory. It might look like this:

        Customer_name/
            Bugs
            Documents
            Important
            Support/
                Done

    As it is today, I have to add these manually, which is harsh when you have a lot going on, and each subdirectory under the customer_name directory needs to have "display all items" set, since it's important to me to see all items in Bugs / Support / Important. It makes my life easier. So, is it possible to automate the process somehow? A macro? Folder templates? What are my options?
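    Outlook's folder tree can be scripted through its COM object model (VBA inside Outlook, or Python via pywin32 as below). A rough sketch that creates the structure above under an existing "Customers" folder in the Inbox; the parent folder name and location are assumptions, and Folders.Add raises an error if a folder with that name already exists:

        import win32com.client  # pywin32

        FOLDERS = ["Bugs", "Documents", "Important", "Support"]
        SUBFOLDERS = {"Support": ["Done"]}

        def create_customer_folders(customer_name):
            outlook = win32com.client.Dispatch("Outlook.Application")
            ns = outlook.GetNamespace("MAPI")
            inbox = ns.GetDefaultFolder(6)                 # 6 = olFolderInbox
            customers = inbox.Folders.Item("Customers")    # assumed parent folder
            customer = customers.Folders.Add(customer_name)
            for name in FOLDERS:
                sub = customer.Folders.Add(name)
                for child in SUBFOLDERS.get(name, []):
                    sub.Folders.Add(child)

        create_customer_folders("Acme Corp")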

    Read the article

  • How to make Windows 7 use a different name for the Program Files folder?

    - by Renato Silva
    How can I properly rename the Program Files folder in Windows 7? That is, not simply rename the directory and create symlinks, but make Windows itself see the location of installed programs as something else. I have already renamed the directory to Programs (using desktop.ini for a localized name in Explorer) and made Program Files into a symlink to it, but I wonder if it's ever possible to remove the symlink by configuring Windows to, under all circumstances, treat Programs as the actual name of the programs directory. I heard that it should be possible to choose the name during Windows installation, but I'm not sure. Besides, I don't want to reinstall Windows from scratch. Before you mention %ProgramFiles%, no, it doesn't work. I'm not sure if Windows has that location hard-coded, or if replacing all occurrences in the registry would be enough.
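    For reference, the per-machine Program Files path that Windows reports is published in the registry under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion (ProgramFilesDir and, on 64-bit systems, ProgramFilesDir (x86)); a read-only sketch for inspecting those values is below. Note this is only where the path is published: plenty of installers and components hard-code the original name, which is largely why a clean rename is so hard, so editing these values is not a supported rename mechanism.

        try:
            import winreg                 # Python 3
        except ImportError:
            import _winreg as winreg      # Python 2

        KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion"

        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            for value in ("ProgramFilesDir", "ProgramFilesDir (x86)"):
                try:
                    path, _ = winreg.QueryValueEx(key, value)
                    print("{}: {}".format(value, path))
                except OSError:
                    pass  # value absent (e.g. no (x86) entry on 32-bit Windows)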

    Read the article

  • How do I build and install the gspca webcam driver?

    - by sam
    I tried to install gspca to run an Orite webcam on Ubuntu 12.04 64-bit, but I failed. The build complains about a lot of missing headers. Here are the commands I ran:

        wget http://mxhaard.free.fr/spca50x/Download/gspcav1-20071224.tar.gz
        tar zxvf gspcav1-20071224.tar.gz
        cd gspcav1-20071224/
        sudo ./gspca_build
        sudo touch /usr/src/linux-headers-3.2.0-25-generic/include/linux/config.h
        sudo mkdir /usr/src/linux-headers-3.2.0-25-generic/include/asm
        sudo touch /usr/src/linux-headers-3.2.0-25-generic/include/asm/semaphore.h
        sudo touch /usr/src/linux-headers-3.2.0-25-generic/include/linux/videodev.h
        sudo touch /usr/src/linux-headers-3.2.0-25-generic/include/linux/smp_lock.h

    How can I solve this? I moved to /usr/src and ran make:

        sam@sam:/usr/src/gspcav1-20071224$ sudo make
        make -C /lib/modules/`uname -r`/build SUBDIRS=/usr/src/gspcav1-20071224 CC=cc modules
        make[1]: Entering directory `/usr/src/linux-headers-3.2.0-25-generic'
        CC [M] /usr/src/gspcav1-20071224/gspca_core.o
        /usr/src/gspcav1-20071224/gspca_core.c:37:26: fatal error: linux/config.h: No such file or directory
        compilation terminated.
        make[2]: *** [/usr/src/gspcav1-20071224/gspca_core.o] Error 1
        make[1]: *** [_module_/usr/src/gspcav1-20071224] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-25-generic'
        make: *** [default] Error 2
        sam@sam:/usr/src/gspcav1-20071224$

    Read the article

  • How can I automate FTP downloads based on date without bi-directional syncing?

    - by Bill
    I have a particular FTP-related situation that I'm having trouble finding a solution for. I need an FTP download/syncing application that can operate within the following parameters:

    - It must run under Windows (installing Python to be able to run a script or some such thing is an acceptable solution).
    - It must be able to ignore files before a certain date (I want to start downloading new files, not all the files that exist in this very large FTP directory).
    - I don't want bi-directional syncing (e.g. I don't want changes I make to the local files and directory structure to change the remote FTP server; the FTP server needs to be left completely alone).
    - Automating it in some fashion would be ideal.

    What would you guys suggest? The solutions I'm turning up are all missing the mark in some fashion (e.g. they have bi-directional syncing, or they have no way of starting the syncing today instead of trying to pull down the entire directory).
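    Since installing Python is on the table, one way to hit all four constraints is a small download-only script run from Windows Task Scheduler. A rough ftplib sketch, assuming the server supports the common MDTM command for per-file modification times; the host, paths, and cutoff date are placeholders:

        from ftplib import FTP
        import os

        HOST, USER, PASSWORD = "ftp.example.com", "user", "password"   # placeholders
        REMOTE_DIR = "/very/large/dir"
        LOCAL_DIR = r"C:\ftp-mirror"
        CUTOFF = "20240101000000"   # only fetch files modified after this (YYYYMMDDHHMMSS)

        ftp = FTP(HOST)
        ftp.login(USER, PASSWORD)
        ftp.cwd(REMOTE_DIR)

        for name in ftp.nlst():
            try:
                # MDTM replies "213 YYYYMMDDHHMMSS" with the file's modification time.
                mtime = ftp.sendcmd("MDTM " + name).split()[-1]
            except Exception:
                continue   # probably a subdirectory, or the server lacks MDTM
            local_path = os.path.join(LOCAL_DIR, name)
            if mtime > CUTOFF and not os.path.exists(local_path):
                with open(local_path, "wb") as fh:
                    ftp.retrbinary("RETR " + name, fh.write)

        ftp.quit()

    Because the script only ever issues downloads, the remote server is never modified.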

    Read the article

  • Web standards or risk avoidance?

    - by Junior Dev
    My company is building an App Engine application. The app encounters a bug (possibly due to an issue with App Engine itself, as per our research) on IE9, but it cannot be reliably reproduced and is experienced by a small percentage of users. The workaround is to force IE9 to use IE8 mode. As a lazy front end developer (who doesn't like CSS hacks, shims and polyfills) I think it's OK to at least try going back to IE9 mode and see what happens, while we're still in private beta. The senior engineer (being more pragmatic) would rather that we continue forcing IE9 users to use the older IE8 mode. Who is right?

    Read the article

  • zero-config CGI enabled web server

    - by halp
    To serve the static content of a directory over HTTP, one can simply navigate to that directory and type:

        python -m SimpleHTTPServer 11111

    which will start an HTTP server on port 11111. This hack is nice because it requires zero config: no stand-alone web server, no config files at all. Is it possible to extend this example, or is there an alternate way to achieve this goal, that also has CGI support? The final goal is to have a quick and lazy way of serving a web site from a certain directory. The site has static content (HTML pages, images), but also a CGI script, and the CGI script must work properly when accessed via a browser. Of course I could set up a virtual host in Apache, allow CGI inside it, etc., but that's not a zero-config approach.
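    The standard library already covers this: Python 2 ships a CGI-capable sibling of SimpleHTTPServer (python -m CGIHTTPServer 11111), and Python 3 folds it into python3 -m http.server --cgi 11111; in both cases scripts are expected under ./cgi-bin and must be executable. A minimal Python 2 script version, in case you want to tweak the port or CGI directory:

        # Zero-config static + CGI server for the current directory (Python 2 stdlib).
        import BaseHTTPServer
        import CGIHTTPServer

        PORT = 11111
        handler = CGIHTTPServer.CGIHTTPRequestHandler
        handler.cgi_directories = ["/cgi-bin"]   # URLs under /cgi-bin are executed as CGI

        server = BaseHTTPServer.HTTPServer(("", PORT), handler)
        print("Serving static files and CGI on port %d" % PORT)
        server.serve_forever()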

    Read the article

  • Error when running "make install" for PHP WebDAV

    - by kron
    Hi, I'm having issues install PHP WebDAV onto Fedora8 - after downloading and running make install I get the following errors: [root@ip-18-192-114-35 dav]# make install /bin/sh /tmp/dav/libtool --mode=compile gcc -I. -I/tmp/dav -DPHP_ATOM_INC -I/tmp/dav/include -I/tmp/dav/main -I/tmp/dav -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/dav/dav.c -o dav.lo gcc -I. -I/tmp/dav -DPHP_ATOM_INC -I/tmp/dav/include -I/tmp/dav/main -I/tmp/dav -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/dav/dav.c -fPIC -DPIC -o .libs/dav.o /tmp/dav/dav.c:21:23: error: ne_socket.h: No such file or directory /tmp/dav/dav.c:22:24: error: ne_session.h: No such file or directory /tmp/dav/dav.c:23:22: error: ne_utils.h: No such file or directory /tmp/dav/dav.c:24:21: error: ne_auth.h: No such file or directory /tmp/dav/dav.c:25:22: error: ne_basic.h: No such file or directory /tmp/dav/dav.c:26:20: error: ne_207.h: No such file or directory /tmp/dav/dav.c:35: error: expected specifier-qualifier-list before 'ne_session' /tmp/dav/dav.c: In function 'dav_destructor_dav_session': /tmp/dav/dav.c:152: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:153: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:155: error: 'DavSession' has no member named 'base_uri_path' /tmp/dav/dav.c:156: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:157: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c:158: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c: In function 'cb_dav_auth': /tmp/dav/dav.c:194: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:194: error: 'NE_ABUFSIZ' undeclared (first use in this function) /tmp/dav/dav.c:194: error: (Each undeclared identifier is reported only once /tmp/dav/dav.c:194: error: for each function it appears in.) 
/tmp/dav/dav.c:195: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c: In function 'zif_webdav_connect': /tmp/dav/dav.c:212: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:212: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:213: error: 'ne_uri' undeclared (first use in this function) /tmp/dav/dav.c:213: error: expected ';' before 'uri' /tmp/dav/dav.c:215: error: 'uri' undeclared (first use in this function) /tmp/dav/dav.c:259: error: 'DavSession' has no member named 'base_uri_path' /tmp/dav/dav.c:260: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:262: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:264: error: 'DavSession' has no member named 'user_name' /tmp/dav/dav.c:267: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c:269: error: 'DavSession' has no member named 'user_password' /tmp/dav/dav.c:271: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c: In function 'get_full_uri': /tmp/dav/dav.c:304: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:307: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:313: error: 'DavSession' has no member named 'base_uri_path' /tmp/dav/dav.c:313: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c:314: error: 'DavSession' has no member named 'base_uri_path_len' /tmp/dav/dav.c: In function 'zif_webdav_get': /tmp/dav/dav.c:329: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:329: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:330: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:330: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:348: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:354: error: 'ne_accept_2xx' undeclared (first use in this function) /tmp/dav/dav.c:359: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:359: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_put': /tmp/dav/dav.c:377: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:377: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:378: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:378: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:396: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:405: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:405: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_delete': /tmp/dav/dav.c:422: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:422: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:423: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:423: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:441: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:448: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:448: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_mkcol': /tmp/dav/dav.c:465: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:465: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:466: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:466: error: 'req' undeclared (first use in this function) 
/tmp/dav/dav.c:484: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:491: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:491: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_copy': /tmp/dav/dav.c:510: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:510: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:511: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:511: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:539: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:550: error: 'NE_DEPTH_INFINITE' undeclared (first use in this function) /tmp/dav/dav.c:550: error: 'NE_DEPTH_ZERO' undeclared (first use in this function) /tmp/dav/dav.c:554: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:554: error: invalid type argument of '->' /tmp/dav/dav.c: In function 'zif_webdav_move': /tmp/dav/dav.c:573: error: 'ne_session' undeclared (first use in this function) /tmp/dav/dav.c:573: error: 'sess' undeclared (first use in this function) /tmp/dav/dav.c:574: error: 'ne_request' undeclared (first use in this function) /tmp/dav/dav.c:574: error: 'req' undeclared (first use in this function) /tmp/dav/dav.c:598: error: 'DavSession' has no member named 'sess' /tmp/dav/dav.c:611: error: 'NE_OK' undeclared (first use in this function) /tmp/dav/dav.c:611: error: invalid type argument of '->' make: *** [dav.lo] Error 1 Any help would be much appreciated. Thanks!

    Read the article
