Search Results

Search found 19871 results on 795 pages for 'commit log'.

  • Secure ldap problem

    - by neverland
    I have tried to configure my OpenLDAP server for secure connections using OpenSSL on Debian 5, but I ran into trouble with the command below. ldap:/etc/ldap# slapd -h 'ldap:// ldaps://' -d1 >>> slap_listener(ldaps://) connection_get(15): got connid=7 connection_read(15): checking for input on id=7 connection_get(15): got connid=7 connection_read(15): checking for input on id=7 connection_get(15): got connid=7 connection_read(15): checking for input on id=7 connection_get(15): got connid=7 connection_read(15): checking for input on id=7 connection_read(15): unable to get TLS client DN, error=49 id=7 connection_get(15): got connid=7 connection_read(15): checking for input on id=7 ber_get_next ber_get_next on fd 15 failed errno=0 (Success) connection_closing: readying conn=7 sd=15 for close connection_close: conn=7 sd=15 Then I searched for "unable to get TLS client DN, error=49 id=7", but there does not seem to be a good solution to it anywhere yet. Please help. Thanks. # Well, I tried to fix something to get it working, but now I get this: ldap:~# slapd -d 256 -f /etc/openldap/slapd.conf @(#) $OpenLDAP: slapd 2.4.11 (Nov 26 2009 09:17:06) $ root@SD6-Casa:/tmp/buildd/openldap-2.4.11/debian/build/servers/slapd could not stat config file "/etc/openldap/slapd.conf": No such file or directory (2) slapd stopped. connections_destroy: nothing to destroy. What should I do now? Log: ldap:~# /etc/init.d/slapd start Starting OpenLDAP: slapd - failed. The operation failed but no output was produced. For hints on what went wrong please refer to the system's logfiles (e.g. /var/log/syslog) or try running the daemon in Debug mode like via "slapd -d 16383" (warning: this will create copious output). Below, you can find the command line options used by this script to run slapd. Do not forget to specify those options if you want to look to debugging output: slapd -h 'ldaps:///' -g openldap -u openldap -f /etc/ldap/slapd.conf ldap:~# tail /var/log/messages Feb 8 16:53:27 ldap kernel: [ 123.582757] intel8x0_measure_ac97_clock: measured 57614 usecs Feb 8 16:53:27 ldap kernel: [ 123.582801] intel8x0: measured clock 172041 rejected Feb 8 16:53:27 ldap kernel: [ 123.582825] intel8x0: clocking to 48000 Feb 8 16:53:27 ldap kernel: [ 131.469687] Adding 240932k swap on /dev/hda5. Priority:-1 extents:1 across:240932k Feb 8 16:53:27 ldap kernel: [ 133.432131] EXT3 FS on hda1, internal journal Feb 8 16:53:27 ldap kernel: [ 135.478218] loop: module loaded Feb 8 16:53:27 ldap kernel: [ 141.348104] eth0: link up, 100Mbps, full-duplex Feb 8 16:53:27 ldap rsyslogd: [origin software="rsyslogd" swVersion="3.18.6" x-pid="1705" x-info="http://www.rsyslog.com"] restart Feb 8 16:53:34 ldap kernel: [ 159.217171] NET: Registered protocol family 10 Feb 8 16:53:34 ldap kernel: [ 159.220083] lo: Disabled Privacy Extensions
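
    A quick way to see whether the LDAPS listener is actually presenting a usable certificate is to test the TLS handshake directly with OpenSSL, and to start slapd against the config file Debian actually uses (/etc/ldap, not /etc/openldap). A rough sketch - the CA certificate path is an assumption, adjust it to wherever your certificates live:

    ```sh
    # Test the TLS handshake and show the certificate chain on the LDAPS port
    openssl s_client -connect localhost:636 -showcerts -CAfile /etc/ldap/ssl/cacert.pem

    # Debian's init script reads /etc/ldap/slapd.conf, not /etc/openldap/slapd.conf,
    # so debug the daemon against that file with the same user/group the script uses
    slapd -h 'ldap:/// ldaps:///' -g openldap -u openldap -f /etc/ldap/slapd.conf -d 256
    ```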

    Read the article

  • When Do OS Questions Belong on Hardware Service Requests?

    - by Get Proactive Customer Adoption Team
    My Oracle Support—Logging an Operating System Service Request. One of the concerns we hear from our customers with Premier Support for Systems is that they have difficulty logging a Service Request (SR) for an operating system issue. Because Premier Support for Systems includes support for the hardware and the associated operating system, you log any operating system issues through a hardware Service Request. To create a hardware Service Request, you enter the information into the Hardware tab of the Create Service Request screen, but to ensure that the hardware Service Request you enter is recognized and routed appropriately for an operating system issue, you need to change the product from your specific hardware to the operating system that the hardware is running. The example below shows you how to create a Service Request for the operating system when the support level is Premier Support for Systems. The key to success is remembering that the operating system coverage is part of the hardware support. To begin, from anywhere within My Oracle Support, click on the Create SR button as you would to log any SR. Enter your Problem Summary and the Problem Description. Next, click on the Hardware tab. Enter the System Serial Number (in this case “12345”) and click on Validate Serial Number. Notice that the product name for the hardware indicates “Sunfire T2000 Server” with an option for a drop-down List of Values. Click on the product drop-down and choose the correct operating system from the list. In this case I have chosen “OpenSolaris Operating System”. Next, you will need to enter the correct operating system version. At this point, you may proceed to complete and submit the Service Request. If your company has Premier Support for Systems, just remember that your operating system has coverage under the hardware it runs on, so start with the Hardware tab on the Service Request screen and change the product-related information to reflect the operating system you need help with. Following these simple steps will ensure that the system assigns your Service Request to the right support team for an operating system issue and the support engineer can quickly begin working your issue.

    Read the article

  • Why do I get two values for ARRAffinity in my cookie with an Application Request Routing web server setup?

    - by Cédric Boivin
    Hello, I have a problem with ARR and my web farm. I have an application with a login page, and when I log into my web application I am always logged out again. So I downloaded Fiddler to validate the affinity with my server, and I see that I have two values for the ARRAffinity key in my cookie. Some pages have two values: ARRAffinity=2ea1e079a7e09ee9844bb1f5eca66f4f94432d3e832c073b80e0091fda6a54d4 ARRAffinity=d000ece875153770e561ea2d34d5ce85968d56e7a02104e726a25d445de25eed Others have only one: ARRAffinity=d000ece875153770e561ea2d34d5ce85968d56e7a02104e726a25d445de25eed Because of this, I think my HTTP requests are being sent to random IIS servers in the farm, so the impact is that I get disconnected. Any ideas?

    Read the article

  • Question regarding the SELinux type enforcement file

    - by Luke Bibby
    In my SELinux .te file, I define two new types called voice_t and data_t, which certain directories will be labeled with in the .fc file (/data/ will be of type data_t and /voice/ will be of type voice_t). I would like the one SELinux policy to be used for all servers in my network, but some servers will log VoIP data and other servers will be used to log IP data. I only want the voice_t type to be defined on some servers and data_t to be defined on the others - is this possible? I have tried using an if statement with a boolean expression, and then defining the type when the condition is true, but this does not seem to work (it tells me there is a syntax error at 'type data_t;' or 'type voice_t;'). Example: if (data_logger) { type data_t; } else { type voice_t; } Any help would be greatly appreciated. Cheers, Luke
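
    As far as I know, type declarations cannot appear inside a conditional - booleans can only switch rules such as allow statements on and off - so the usual workaround is to declare both types unconditionally and gate the rules instead. A minimal sketch of the .te file, assuming a refpolicy-style module; the module name, the logger_t domain and the exact permissions are illustrative placeholders, not taken from the question's real policy:

    ```
    policy_module(traffic_logger, 1.0.0)

    # Declarations must be unconditional; only rules may live inside if () blocks
    type data_t;
    type voice_t;
    files_type(data_t)
    files_type(voice_t)

    # Toggle per server with: setsebool -P data_logger on|off
    bool data_logger false;

    if (data_logger) {
        # rules used only on the IP-data loggers (logger_t = your daemon's domain)
        allow logger_t data_t:dir  { search write add_name };
        allow logger_t data_t:file { create append getattr };
    } else {
        # rules used only on the VoIP loggers
        allow logger_t voice_t:dir  { search write add_name };
        allow logger_t voice_t:file { create append getattr };
    }
    ```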

    Read the article

  • ODI 11g – Scripting Repository Creation

    - by David Allan
    Here’s a quick post on how to create both master and work repositories in one simple dialog; it's using the Groovy capabilities in ODI 11g and the Groovy SwingBuilder components. So if you want more or less, take the Groovy script and change it - it's easy stuff. The Groovy script odi_create_repos.groovy is here; just open it in ODI before connecting and you will be able to create both master and work repositories with ease – or check the Groovy out and script your own automation – you can construct the master, work and runtime repositories, so if you are embedding ODI as your DI engine this may be very useful. When you click ‘Create Repository’ you will see the following in the log as the master repository starts to be created; ====================================================== Repository Creation Started.... ====================================================== Master Repository Creation Started.... Then the completion message followed by the work repository creation and final completion message. Master Repository Creation Completed. Work Repository Creation Started. Work Repository Creation Completed. ====================================================== Repository Creation Completed Successfully ====================================================== Script exited. If any error is hit, the script just exits and prints any error to the log. For example, if I enter no passwords, I will get this error: ====================================================== Repository Creation Started.... ====================================================== Master Repository Creation Started.... ====================================================== Repository Creation Complete in Error ====================================================== oracle.odi.setup.RepositorySetupException: oracle.odi.core.security.PasswordPolicyNotMatchedException: ODI-10189: Password policy MinPasswordLength is not matched. ====================================================== Script exited. This is another example of using the ODI 11g SDK showing how to automate the construction of your data integration environment. The main interfaces and classes used here are IMasterRepositorySetup / MasterRepositorySetupImpl and IWorkRepositorySetup / WorkRepositorySetupImpl.

    Read the article

  • Troubleshooting Amazon EC2 reboot

    - by tgm
    We've had a server (CentOS) running in EC2 for a few months. It had been going pretty smoothly until today, when we got an alarm that the server was unavailable (the HTTP service couldn't be reached). So I tried SSHing into the box, but that timed out as well. I logged into the EC2 console and it said the instance was running, and there wasn't anything in the system log. One odd thing I noticed is that even though we have an Elastic IP attached to it (which shows in the Elastic IP management area), the instance detail is not showing that there is an EIP associated with the instance. I looked through the message log, and the last thing I see around the time we got our alert was dhclient renewing the lease. I'm guessing there may have been some sort of issue with the networking. How might I check if that was the problem, or if there were any other issues that may have caused our instance to stop responding?
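
    If you have command-line access to the EC2 API, a few checks can confirm what the console is claiming about the instance and the EIP; a sketch using the modern AWS CLI, with a placeholder instance ID:

    ```sh
    # Does EC2 still report the instance and its status checks as healthy?
    aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0

    # Is the Elastic IP really associated with this instance right now?
    aws ec2 describe-addresses

    # Pull the console output - kernel panics and OOM kills usually show up here
    aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text
    ```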

    Read the article

  • Upgrade to 2008 R2

    - by DavidWimbush
    I don't like it, Carruthers. It's just too quiet. Well, I've done the pre-production server, the main live server and the Reporting/BI server with remarkably little trouble. Pre-production and live were rebuilds. I failed live over to our log shipping standby for the duration, which has a gotcha I blogged about before. When I failed back to the primary live server again, it was very quick to bring the databases online. I understand the databases don't actually get upgraded until you recover them, but there was no noticeable delay. It's gone from 2005 Workgroup - limited to 4GB of memory - to 2008 R2 Standard so it can now use nearly all of the 30GB in the server. It's so much faster. The reporting/BI server I upgraded in situ. This took a while but, again, went smoothly. Just watch out, because the master database was left at compatibility level 90. Also the upgrade decided to use the reporting service's credentials for database access when running reports. It didn't preserve the existing credentials and I had to go into the Reporting Configuration Manager to put them back in. Make sure you know what credentials your server is using before you upgrade. All things considered, a fairly painless experience. Now I just have to upgrade and reset our log shipping standby server again!
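
    For reference, checking and fixing the compatibility level is a one-liner each way; a quick sketch (the database name is a placeholder - level 100 corresponds to SQL Server 2008/2008 R2):

    ```sql
    -- Which databases were left at the old compatibility level after the upgrade?
    SELECT name, compatibility_level FROM sys.databases;

    -- Bring a database up to the 2008 R2 level
    ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 100;
    ```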

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected for the data integration category for their innovative use of Oracle’s data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award-winning projects. Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement due to the Oracle Data Integrator’s integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems including Oracle Database, HP NonStop and Microsoft SQL Server into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI’s integration with GoldenGate, Raymond James now sees a 9 second median latency (from source commit to ODS target commit). The ODS solution delivers high quality, accurate data for consuming applications such as Raymond James’ next generation client and portfolio management systems as well as real-time operational reporting. It enables timely information for making better decisions. There are more benefits Raymond James achieved with this implementation of Oracle’s data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, have told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users, and for applications (for example for cache hydration mechanisms). One cool innovation example among many in this project is that using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes. And these processes have hardly failed. The integration processes fix the errors themselves. Pretty amazing; and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan’s photo with the Innovation Award above.) The other winner this year in the data integration category, Morrison Supermarkets, is the UK’s 4th largest grocery retailer. The company has been migrating all their legacy applications on to a new-world application set based on Oracle and consolidating all BI on to a single Oracle platform.
    The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal with deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that also supports operational decision-making requirements from a wide range of Oracle-based ERP applications such as E-Business Suite, PeopleSoft, and Oracle Retail Suite. They use GoldenGate’s log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 mins). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday’s blog, the right data integration platform can transform the business. Here is another example: the point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award: Morrisons won the Grocer 33 award in 2012 - beating all other major UK supermarkets in product availability. Congratulations, Morrisons, on another award! Celebrating the innovation and the success of our customers with Oracle’s data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld, on how they are creating more value for their organizations.

    Read the article

  • GlassFish will not start when SNMP is enabled

    - by edarc
    I have a GlassFish v3 app server running on 64-bit Debian Lenny. Everything is running fine, except I would like to monitor GF's JVM instance with SNMP. However, every time I try to enable it by adding the following <jvm-options> in domain.xml: -Dcom.sun.management.snmp.port=10161 -Dcom.sun.management.snmp.acl.file=/path/to/snmp.acl -Dcom.sun.management.snmp.interface=127.0.0.1 GlassFish refuses to start: $ asadmin start-domain Waiting for DAS to start .Error starting domain: default. The server exited prematurely with exit code 1. Command start-domain failed. $ There is also nothing illuminating (well, really nothing at all) in jvm.log or server.log. The snmp.acl file contains: acl = { { communities = public access = read-only managers = localhost } } and is chmod 600 (I know this is not the problem because it will actually fail with an error about the permissions if it is set to anything other than 600) $ java -version java version "1.6.0_0" OpenJDK Runtime Environment (build 1.6.0_0-b11) OpenJDK 64-Bit Server VM (build 1.6.0_0-b11, mixed mode)
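
    Two things worth trying: make sure nothing else already owns the SNMP port, and add the options through asadmin rather than hand-editing domain.xml, so any validation error is reported immediately. A rough sketch - the values are the ones from the question:

    ```sh
    # Is anything already listening on the SNMP port?
    netstat -lnp | grep 10161

    # Add the JVM options via asadmin instead of editing domain.xml directly
    asadmin create-jvm-options '-Dcom.sun.management.snmp.port=10161'
    asadmin create-jvm-options '-Dcom.sun.management.snmp.acl.file=/path/to/snmp.acl'
    asadmin create-jvm-options '-Dcom.sun.management.snmp.interface=127.0.0.1'

    # Start verbosely so the JVM's own SNMP startup error (if any) reaches the console
    asadmin start-domain --verbose
    ```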

    Read the article

  • How to get nginx to serve up on an elastic IP

    - by geekbri
    I have an EC2 instance which is serving up PHP pages with nginx and php-fpm. This works perfectly fine when accessed through the public DNS for the instance. However, if I try to access the site with the Elastic IP which is bound to it, it serves up a generic "Welcome to nginx" page, even though in my server block I have listen 80 (which I thought listened on all incoming IPs on port 80). Here is my nginx config. server { listen 80; access_log /var/log/nginx/access.log; root "/var/www/clipperz/"; index index.html index.php; # Default location location / { try_files $uri $uri/ index.html; } # Parse all .php file in the $document_root directory location ~ .php$ { include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } }
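
    When the Host header (or a bare IP) matches no server_name, nginx hands the request to the default server for that port, which on most distros is the stock "Welcome to nginx" block. A hedged fix is to make this server block the explicit catch-all; the exact listen syntax varies a little between nginx versions:

    ```nginx
    server {
        # Catch requests for any Host header, including the bare Elastic IP
        listen 80 default_server;
        server_name _;

        root "/var/www/clipperz/";
        index index.html index.php;
        # ... the existing location blocks stay as they are ...
    }
    ```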

    Read the article

  • Tune SQL Server Express using Profiler?

    - by Glen Little
    I have a SQL Server 2005 database... a copy of it is running in development on a full version of SQL server. Another copy is running in SQL Server 2005 Express on a web server. I've used SQL Profiler and saved a Tuning trace log from activity on the SQL Express copy of the database. I want to use the saved trace log in the Database Engine Tuning Advisor... If I try when connecting the Advisor to the Express database, I am told that Express is not supported. If I try when connecting the Advisor to the SQL Server database, I get empty results. Is there any way to do this?
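
    Since the Tuning Advisor refuses to connect to an Express instance, one workaround is to load the saved trace into a table on the full (Developer/Standard) instance and point the Advisor at that table as its workload; a sketch, with the trace path as a placeholder:

    ```sql
    -- Import the tuning trace captured against the Express copy
    SELECT * INTO dbo.ExpressTuningTrace
    FROM sys.fn_trace_gettable('C:\traces\express_tuning.trc', DEFAULT);
    ```

    In the Advisor you would then choose "Table" as the workload source and make sure the database names referenced in the trace also exist on the full instance.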

    Read the article

  • Install IIS on Windows 7

    - by rad
    I have tried many ways to install IIS in my Windows 7 Professional environment, and I always get an error. I tried with the Web Platform Installer - "Fatal Error during installation". With Windows Features in the configuration panel - "An error has occurred. Not all of the features were successfully". With the command line (http://forums.iis.net/t/1152513.aspx) - "Fatal Error during installation". In the log file IIS.log I have: [04/10/2012 16:13:25] "C:\Windows\SysWOW64\inetsrv\aspnetca.exe" /install /basic 2.0.50727.0 [04/10/2012 16:13:25] < !!FAIL!! RegOpenKeyEx(HKLM\SYSTEM\CurrentControlSet\Services\ASP.NET_2.0.50727\Names) result=0x80070002 [04/10/2012 16:13:25] < !!FAIL!! SetAccessToPerfCountersKeys() result=0x80070002 [04/10/2012 16:13:25] < !!FAIL!! Installation failure, result=0x80070002 If I uninstall/reinstall IIS, I get some errors but my site runs correctly, and after a Windows restart the install is rolled back and I lose my installation. Any help?
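
    When both the GUI and the Web Platform installer fail, driving the feature install from an elevated command prompt with DISM sometimes gets further and logs more detail to %windir%\Logs\CBS\CBS.log; a rough sketch (the feature names are the common IIS ones - pick the set you actually need), followed by re-registering ASP.NET 2.0, since IIS.log complains about the missing ASP.NET_2.0.50727 registry keys:

    ```bat
    rem Run from an elevated command prompt
    dism /online /enable-feature /featurename:IIS-WebServerRole
    dism /online /enable-feature /featurename:IIS-WebServer
    dism /online /enable-feature /featurename:IIS-ASPNET

    rem Re-register ASP.NET 2.0 with IIS (use Framework64 on a 64-bit install)
    %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
    ```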

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey, someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-coordinator for a leading South African retailer at store level. Don’t get me wrong, there is absolutely nothing wrong with an honest day’s labor and in fact I highly recommend it, however I’m obliged to refer to the designation cautiously; in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong – I only had to phone it in… Luckily that wasn’t all I did. My duties extended to some other interesting annual occurrence – stock take. Despite being a somewhat more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete.  Then also I remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn’t get the basics right – that being the act of counting. Sounds simple, right? Well, with a store which could potentially carry over tens of thousands of different items… well, let’s just say I believe that’s when I first became a coffee addict. In those days the act of counting stock was a very humble process. Nothing like we have today. A staff member would be assigned a bin or shelf filled with items he or she had to sort then count. Thereafter they had to record their findings on a complementary piece of paper. Every night I would manage several teams. Each team was divided into two groups - counters and auditors. Both groups had the same task, only auditors followed shortly on the heels of the counters, recounting stock levels, making sure the original count corresponded to their findings. It was a simple yet hugely responsible orchestration of people and thankfully there was one fundamental and golden rule I could always abide by to ensure things ran smoothly – no-one was allowed to audit their own work. Nope, not even on nights when I didn’t have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious - late at night and with so much to do we were prone to make some mistakes, then on the recount, without a fresh set of eyes, you were likely to repeat the offence. Now years later this rule or guideline still holds true as we develop software (as far removed as software development from counting stock may be). For some reason it is a fundamental guideline we’re simply ignorant of. We write our code, we write our tests and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error the second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or team) making sure the logic is sound. 
    The practice, however, has its obvious drawbacks. Firstly and mainly, it is resource-intensive, and from what I’ve seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find some solution revealed a project by Microsoft Research called PEX. PEX is a framework which creates several test scenarios for each method or class you write, automatically. Think of it as your own personal auditor. Within a few clicks the framework will auto-generate several unit tests for a given class or method and save them to a single project. PEX helps to audit your work. It lends a fresh set of eyes to any project you’re working on and, best of all, it is cost-effective and fast. Check them out at http://research.microsoft.com/en-us/projects/pex/. In upcoming posts we’ll dive deeper into how it works and how it can help you. Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.

    Read the article

  • Requiring SSH-key Login Via PAM From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity. Thus, I'd like to require that SSH keys be used for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box when I was trying to set this up, I'm acutely aware of the chance of screwing myself while trying to accomplish this. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There already exists a very similar question here, but I can't quite figure out how to get to what I want from that information. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config.
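
    One way to get both goals without writing any PAM configuration is OpenSSH's own Match blocks: turn password authentication off globally, then switch it back on only for the trusted ranges. A sketch of the relevant /etc/ssh/sshd_config lines, using the range from the question (very old OpenSSH releases may want a wildcard such as 10.* instead of CIDR notation in Match Address); keep an existing session open while you test so you can back the change out:

    ```
    # Global policy: public keys only
    PubkeyAuthentication yes
    PasswordAuthentication no
    ChallengeResponseAuthentication no

    # Trusted internal range may still log in with name/password
    Match Address 10.0.0.0/8
        PasswordAuthentication yes
    ```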

    Read the article

  • Cannot install .NET Framework 4.0 on Windows XP SP3

    - by Bob
    I'm using Windows XP SP3 logged in as the administrator. I had RAID Mirroring running. The motherboard broke earlier in the year. When I got a new battery I did not resync. I just use the disks as two separate disks. I searched Google for the errors but I didn't find anything detailed enough. The following Microsoft components are in Add/Remove programs: .NET Framework 1.1 .NET Framework 2.0 Service Pack 2 .NET Framework 3.0 Service Pack 2 .NET Framework 3.5 SP1 Microsoft Compression Client Pack 1.0 for Windows XP Microsoft Office Enterprise 2007 Microsoft Silverlight Microsoft USB Flash Driver Manager Micrsoft User-mode Driver Framework Feature Pack 1.0 Microsoft Visual C++ 2005 ATL Update KB 973923 x86 8.050727.4053 Microsoft Visual C++ 2005 Redistributable This is the installation log: Exists: evaluating... [10/21/2011, 22:17:14]MsiGetProductInfo with product code {3C3901C5-3455-3E0A-A214-0B093A5070A6} found no matches [10/21/2011, 22:17:14] Exists evaluated to false [10/21/2011, 22:15:50]calling PerformAction on an installing performer [10/21/2011, 22:15:50] Action: Performing actions on all Items... [10/21/2011, 22:15:50]Wait for Item (clr_optimization_v2.0.50727_32) to be available [10/21/2011, 22:15:50]clr_optimization_v2.0.50727_32 is now available to install [10/21/2011, 22:15:50]Creating new Performer for ServiceControl item [10/21/2011, 22:15:50] Action: ServiceControl - Stop clr_optimization_v2.0.50727_32... [10/21/2011, 22:15:50]ServiceControl operation succeeded! [10/21/2011, 22:15:50] Action complete [10/21/2011, 22:15:50]Error 0 is mapped to Custom Error: [10/21/2011, 22:15:50]Wait for Item (Windows6.0-KB956250-v6001-x86.msu) to be available [10/21/2011, 22:15:51]Windows6.0-KB956250-v6001-x86.msu is now available to install [10/21/2011, 22:15:51]Created new DoNothingPerformer for File item [10/21/2011, 22:15:51]No CustomError defined for this item. [10/21/2011, 22:15:51]Wait for Item (Windows6.1-KB958488-v6001-x86.msu) to be available [10/21/2011, 22:15:51]Windows6.1-KB958488-v6001-x86.msu is now available to install [10/21/2011, 22:15:51]Created new DoNothingPerformer for File item [10/21/2011, 22:15:51]No CustomError defined for this item. [10/21/2011, 22:15:51]Wait for Item (netfx_Core.mzz) to be available [10/21/2011, 22:15:52]netfx_Core.mzz is now available to install [10/21/2011, 22:15:52]Created new DoNothingPerformer for File item [10/21/2011, 22:15:52]No CustomError defined for this item. [10/21/2011, 22:15:52]Wait for Item (netfx_Core_x86.msi) to be available [10/21/2011, 22:15:52]netfx_Core_x86.msi is now available to install [10/21/2011, 22:15:52]Creating new Performer for MSI item [10/21/2011, 22:15:52] Action: Performing Action on MSI at F:\DOCUME~1\Owner\LOCALS~1\Temp\Microsoft .NET Framework 4 Client Profile Setup_4.0.30319\netfx_Core_x86.msi... [10/21/2011, 22:15:52]Log File F:\DOCUME~1\Owner\LOCALS~1\Temp\Microsoft .NET Framework 4 Client Profile Setup_20111021_221545515-MSI_netfx_Core_x86.msi.txt does not yet exist but may do at Watson upload time [10/21/2011, 22:15:52]Calling MsiInstallProduct(F:\DOCUME~1\Owner\LOCALS~1\Temp\Microsoft .NET Framework 4 Client Profile Setup_4.0.30319\netfx_Core_x86.msi, EXTUI=1 [10/21/2011, 22:17:14]MSI (F:\DOCUME~1\Owner\LOCALS~1\Temp\Microsoft .NET Framework 4 Client Profile Setup_4.0.30319\netfx_Core_x86.msi) Installation failed. 
Msi Log: Microsoft .NET Framework 4 Client Profile Setup_20111021_221545515-MSI_netfx_Core_x86.msi.txt [10/21/2011, 22:17:14]PerformOperation returned 1603 (translates to HRESULT = 0x80070643) [10/21/2011, 22:17:14] Action complete [10/21/2011, 22:17:14]OnFailureBehavior for this item is to Rollback. [10/21/2011, 22:17:14] Action: Performing actions on all Items... [10/21/2011, 22:17:14] Action complete [10/21/2011, 22:17:14] Action complete [10/21/2011, 22:17:14]Final Result: Installation failed with error code: (0x80070643), "Fatal error during installation. " (Elapsed time: 0 00:01:29). [10/21/2011, 22:17:41]WM_ACTIVATEAPP: Focus stealer's windows WAS visible, NOT taking back focus SECOND LOG REQUESTED BELOW: MSI (s) (6C:EC) [22:17:13:828]: Invoking remote custom action. DLL: F:\WINDOWS\Installer\MSIBB4.tmp, Entrypoint: NgenUpdateHighestVersionRollback MSI (s) (6C:64) [22:17:13:984]: Executing op: ActionStart(Name=CA_NgenRemoveNicPFROs_I_DEF_x86.3643236F_FC70_11D3_A536_0090278A1BB8,,) MSI (s) (6C:64) [22:17:13:984]: Executing op: ActionStart(Name=CA_NgenRemoveNicPFROs_I_RB_x86.3643236F_FC70_11D3_A536_0090278A1BB8,,) MSI (s) (6C:64) [22:17:13:984]: Executing op: CustomActionRollback(Action=CA_NgenRemoveNicPFROs_I_RB_x86.3643236F_FC70_11D3_A536_0090278A1BB8,ActionType=17729,Source=BinaryData,Target=NgenRemoveNicPFROs,) MSI (s) (6C:AC) [22:17:13:984]: Invoking remote custom action. DLL: F:\WINDOWS\Installer\MSIBB5.tmp, Entrypoint: NgenRemoveNicPFROs MSI (s) (6C:64) [22:17:14:000]: Executing op: End(Checksum=0,ProgressTotalHDWord=0,ProgressTotalLDWord=0) MSI (s) (6C:64) [22:17:14:000]: Error in rollback skipped. Return: 5 MSI (s) (6C:64) [22:17:14:015]: No System Restore sequence number for this installation. MSI (s) (6C:64) [22:17:14:015]: Unlocking Server MSI (s) (6C:64) [22:17:14:015]: PROPERTY CHANGE: Deleting UpdateStarted property. Its current value is '1'. MSI (s) (6C:64) [22:17:14:031]: Note: 1: 1708 MSI (s) (6C:64) [22:17:14:031]: Product: Microsoft .NET Framework 4 Client Profile -- Installation failed. MSI (s) (6C:64) [22:17:14:078]: Cleaning up uninstalled install packages, if any exist MSI (s) (6C:64) [22:17:14:078]: MainEngineThread is returning 1603 MSI (s) (6C:EC) [22:17:14:171]: Destroying RemoteAPI object. MSI (s) (6C:9C) [22:17:14:171]: Custom Action Manager thread ending. MSI (c) (F4:C4) [22:17:14:203]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (F4:C4) [22:17:14:203]: MainEngineThread is returning 1603 === Verbose logging stopped: 10/21/2011 22:17:14 ===

    Read the article

  • Not able to run discovery in XenApp 6.5

    - by BDAA
    I am installing a fresh Citrix farm. I installed XenApp 6.5 and, while configuring it, added the domain\ctxadmin user as the farm administrator. Now when I log into the Administrator account on the XenApp server and run discovery, it says: This user account is not an administrator of this farm, or there was a problem contacting the data store. Check that the data store server for the Citrix XenApp farm is online, and verify that your account is configured and enabled as an administrator on the farm. Then I tried to RDP into the XenApp server as the ctxadmin user, and now I get the error "The Desktop you are trying to open is currently unavailable. Contact you system administrator to confirm the correct settings are in place for your client connection". I believe that starting from XenApp version 6.x, once XenApp is installed, a Citrix policy needs to be changed, as described in http://support.citrix.com/article/CTX124745. But to change the Citrix policy I need to log into AppCenter, which I cannot do because I am not able to run discovery as described above. So I am caught in an endless loop. Any help would be greatly appreciated

    Read the article

  • Cannot login to SQL Server 2008 R2 with Windows authentication

    - by Ian Boyd
    When I try to connect to SQL Server (2008 R2) using Windows authentication: I cannot. Checking the Windows Application event log, I find the error: Login failed for user 'AVATOPIA\ian'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: ] Log Name: Application Source: MSSQLSERVER Event ID: 18456 Level: Information User: AVATOPIA\ian OpCode: Task Category: Logon I can log in to the computer itself using Windows authentication. I can log into SQL Server using the local Windows Administrator account. We can connect to 8 other SQL Servers on the domain using Windows Authentication. Just this one, which is the only one that is 2008 R2, is failing. So I assume it's a bug with 2008 R2. Note: I cannot log on locally, or remotely, using Windows authentication. I can log in locally and remotely using SQL Server Authentication. Update Note: It's not limited to SQL Server Management Studio; standalone applications that connect using Windows authentication fail: Note: It's not a client problem, as we can connect fine to other (non-SQL Server 2008 R2) machines: I'm sure there's a technote or knowledge base article describing why SQL Server 2008 R2 is broken by default, but I can't find it. Update 2 Matt figured out the change that Microsoft made so that SQL Server 2008 R2 is broken by default: Administrators are no longer administrators. All that remains is to figure out how to make Administrators administrators. One of these days I'm going to start a list of changes around Microsoft's "broken by default" initiative. Steps to reproduce the problem How do I add a group to the sysadmin fixed server role? Here are the steps I try that don't work: Click Add: Click Object Types: Ensure that you have no ability to add groups: and click OK. Under Enter the object names to select, enter Administrators: Click Check Names, and ensure that you are not allowed to add groups: and click Cancel. Click Browse..., and ensure that you have no ability to add groups: You should now still not have added any group to the sysadmin role. Additional information SQL Server Management Studio is being run as an administrator: SQL Server is set to use Windows Authentication: tried while logged into SQL with both sa and the only other sysadmin domain account (screenshot can be supplied for those who don't believe)
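
    Since 2008 R2 setup no longer grants BUILTIN\Administrators the sysadmin role, one recovery path (a hedged sketch - the service name and account below are placeholders taken from this post, adjust to your instance) is to restart the instance in single-user maintenance mode, where members of the local Administrators group are allowed to connect as sysadmin, and grant the login yourself:

    ```bat
    rem Restart the instance in single-user maintenance mode
    net stop MSSQLSERVER
    net start MSSQLSERVER /m

    rem Connect as a local admin and grant the Windows login sysadmin
    rem (drop the CREATE LOGIN statement if the login already exists)
    sqlcmd -S . -E -Q "CREATE LOGIN [AVATOPIA\ian] FROM WINDOWS; EXEC sp_addsrvrolemember 'AVATOPIA\ian', 'sysadmin';"

    rem Back to normal operation
    net stop MSSQLSERVER
    net start MSSQLSERVER
    ```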

    Read the article

  • Oracle OpenWorld Preview: Oracle Social Network Technical Tour

    - by kellsey.ruppel
      Originally posted by Jake Kuramoto on The Apps Lab blog. Yesterday, I told you about the Oracle Social Network Developer Challenge we’ll be hosting at OpenWorld (@oracleopenworld) next week. If you’re attending OpenWorld or JavaOne (@javaoneconf) and want to get hands-on experience with Oracle Social Network and show off your coding chops, this is the event for you. Go ahead and register. I’ll wait. But wait, there’s more. If you’re not sure you’ll have the time for the Challenge, don’t want to embarrass anyone with your awesome skills, have a hectic schedule and can’t commit, or just want to learn more about Oracle Social Network and how to extend it, then the Oracle Social Network Technical Tour is for you. Read Jake's originally entry to learn more about The Tour!

    Read the article

  • Unity Plugin DLLNotFoundException

    - by Dewayne
    I am using a plugin DLL that I created in Visual C++ Express 2010 on windows 7 64 bit Ultimate Edition. The DLL functions properly on the machine that it was originally created on. The problem is that the DLL is not functioning in the Unity3d Editor on another machine and giving an error that basically states that the DLL is missing some of its dependencies. The target machine is running Windows 7 Home 64 bit (if this is relevant) Results from the error log of Dependency Walker: Error: The Side-by-Side configuration information for "c:\users\dewayne\desktop\shared\vrpnplugin\unityplugin\build\release\OPTITRACKPLUGIN.DLL" contains errors. The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail (14001). Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module. Error: Modules with different CPU types were found. Warning: At least one delay-load dependency module was not found. Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module. The Visual C++ Express 2010 project and solution file can be found here: https://docs.google.com/leaf?id=0B1F4pP7mRSiYMGU2YTJiNTUtOWJiMS00YTYzLThhYWQtMzNiOWJhZDU5M2M0&hl=en&authkey=CJSXhqgH The zip is 79MB and also contains its dependencies. The DLL in question is OptiTrackPlugin.dll
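
    To narrow down which import or architecture mismatch the target machine is complaining about, the Visual Studio command-line tools can dump the plugin's dependencies and bitness; a rough sketch from a Visual Studio command prompt. The usual culprits for a side-by-side error from a VC++ 2010 build are a missing Visual C++ 2010 redistributable on the target machine, or a DLL linked against the debug CRT:

    ```bat
    rem Which DLLs does the plugin import? Look for MSVCR100/MSVCP100 (or MSVCR100D)
    dumpbin /dependents OptiTrackPlugin.dll

    rem Was the plugin built x86 or x64? It must match the bitness of the process loading it
    dumpbin /headers OptiTrackPlugin.dll | findstr machine
    ```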

    Read the article

  • VirtualHosts Stopped Working

    - by Kevin C.
    I'm working on a website and have WAMP setup for local testing. Usually I set up virtual hosts using httpd-vhosts + the hosts file without a hitch. All of a sudden, my virtual hosts are no longer working. I know that it's pointing to Apache because I get a '403 Forbidden' error, but that's about it. All of my previously working virtual hosts no longer work as well. Anybody know what's going on? httpd-vhosts.conf <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot "C:\Documents and Settings\kevin\Desktop\websites\fusion" ServerName ebrochures ErrorLog "logs/your_own-error.log" CustomLog "logs/your_own-access.log" common <directory "C:\Documents and Settings\kevin\Desktop\websites\fusion"> Options Indexes FollowSymLinks AllowOverride all Order Deny,Allow Deny from all Allow from 127.0.0.1 </directory> hosts file: 127.0.0.1 fusion And yes, I am including the virtual hosts file in my httpd.conf file: # Virtual hosts Include conf/extra/httpd-vhosts.conf
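
    One mismatch that stands out: the hosts file maps the name fusion, but the vhost only declares ServerAlias ebrochures, so requests for http://fusion never match this block by name. That alone may not explain the 403 (directory permissions or another vhost's Deny rules can cause it too), but lining the names up is a cheap first step; a hedged sketch of the relevant lines:

    ```apache
    <VirtualHost *:80>
        ServerName fusion
        ServerAlias ebrochures
        DocumentRoot "C:\Documents and Settings\kevin\Desktop\websites\fusion"
        # ... rest of the block unchanged ...
    </VirtualHost>
    ```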

    Read the article

  • pcap stream rotation and pruning

    - by pilcrow
    Some of my servers collect a lot of packet data. Is there a utility (or patch to tcpdump(1)) to log a pcap stream to disk which: Rotates based on size of data written Prunes written files, keeping only the N most recent Does not re-use output filenames Is self-contained (Ruling out, e.g., a rotation with external pruning via crond(8)+tmpwatch(8)) Basically I want a multilog or svlogd that groks the pcap record format. The -W filecount option of tcpdump-4.0.0 "prunes" by recycling old filenames, which violates #3 above, forcing me to consult mtimes to determine recency and providing no guarantees against surprise truncation of the log file. The -G option introduces strftime(2)-specifier support in output filenames, which would give me at least second-precision in file names, but I can't figure out how to get pruning to work with this scheme.
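
    Recent tcpdump builds come close to ticking all four boxes on their own: -G rotates into strftime-named files (so names are never reused) and -z runs a post-rotate command after each file is closed, which can do the pruning, keeping everything inside one invocation. A sketch assuming a tcpdump new enough to have -z, and that prune-pcaps.sh is your own script; the trade-off is that -G rotates on time rather than size (use -C instead if size-based rotation matters more than the naming):

    ```sh
    #!/bin/sh
    # /usr/local/bin/prune-pcaps.sh - keep only the 48 newest capture files
    cd /var/cap && ls -1t trace-*.pcap | tail -n +49 | xargs -r rm -f
    ```

    ```sh
    # Rotate hourly into uniquely named files, pruning after every rotation
    tcpdump -i eth0 -G 3600 -w '/var/cap/trace-%Y%m%d-%H%M%S.pcap' -z /usr/local/bin/prune-pcaps.sh
    ```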

    Read the article

  • IPv6 local address in hosts file

    - by Dan
    I have set up a local domain on my Apache server. Then I added the following line to my /etc/hosts file ::1 exampledomain.local When I try to navigate to it (I tried Firefox and Chromium), I get a "server not found" error. Then I tried ping6 and it worked: dan@danny:~$ ping6 exampledomain.local PING exampledomain.local(exampledomain.local) 56 data bytes 64 bytes from exampledomain.local: icmp_seq=1 ttl=64 time=0.032 ms If I replace ::1 with 127.0.0.1 in my hosts file, it works fine. I'm not sure if this is relevant, but this is my Virtual Host configuration in Apache2: <VirtualHost *:80> ServerAlias exampledomain.local DocumentRoot /home/dan/sites/exampledomain <Directory /home/dan/sites/exampledomain> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/exampledomain-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel debug CustomLog ${APACHE_LOG_DIR}/exampledomain-access.log combined </VirtualHost> My question is: How can I make it work with the IPv6 address?
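
    Before blaming the hosts file, it is worth confirming that Apache is actually listening on the IPv6 loopback, since "server not found" and "connection refused" point at different layers; a few hedged checks (curl's -6 flag forces IPv6, and the bracketed literal bypasses name resolution entirely):

    ```sh
    # Is anything listening on port 80 over IPv6?
    sudo netstat -tln | grep ':80'

    # Hit the vhost over ::1 directly, sending the Host header the vhost expects
    curl -6 -H 'Host: exampledomain.local' 'http://[::1]/'

    # If Apache turns out to be bound only to IPv4, an explicit IPv6 listener can be
    # added, e.g. in /etc/apache2/ports.conf:  Listen [::]:80
    ```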

    Read the article

  • slow query logging in mysql server

    - by Vinayak Mahadevan
    Hi. I have installed MySQL Community Edition 5.1.41 on a Windows 2000 server. In the my.ini file I have enabled slow query logging and have redirected the output to a table. I have set long_query_time to 10 seconds. Then, after running some queries, I checked the slow query log table and found that all the queries which were executed have been logged, and a file called database-slow.log has also been created in the data folder. Can anybody please tell me where I am going wrong? I am using the built-in InnoDB and have not activated the InnoDB plugin. Thanks
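
    For reference, the my.ini settings that drive this on 5.1 look roughly like the sketch below (values mirror the question). log_output decides whether entries go to the TABLE, the FILE, or both, which is why rows in mysql.slow_log and a database-slow.log file can appear at the same time; and if every statement is being logged despite long_query_time=10, it is worth checking that log_queries_not_using_indexes is not also switched on:

    ```ini
    [mysqld]
    slow_query_log                  = 1
    slow_query_log_file             = database-slow.log
    long_query_time                 = 10
    log_output                      = TABLE    # or FILE, or FILE,TABLE
    # log_queries_not_using_indexes = 0        # when on, logs every unindexed query
    ```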

    Read the article

  • HP ML350G6 running hyper-V 2008 r2 resets itself every 2 hours

    - by GT
    The system started resetting itself exactly every 2 hours. These are the messages in the iLO2 log: Informational iLO 2 03/07/2010 20:40 03/07/2010 20:40 1 Server power restored. Caution iLO 2 03/07/2010 20:40 03/07/2010 20:40 1 Server reset. It's not an ASR reset (that would show in the log) Redundant power supplies, swapped but no change. Turned off all virtual machines (i.e. now only running hypervisor) but not OK Boot HP smartstart diagnostics disk, ALL OK Diagnostic disk reports no errors Went back to booting Hypervisor and the problem is back. Seems the hyper-V system disk has got a time based program (virus) causing the reset. I thought the hypervisor had a small attack surface and should be OK. All virtual machines (SBS2008, Win7 and Win XP) and network computers are protected with TrendMicro WFBS. I am about to rebuild the disk (I have backups) but wondered if there were any suggestions to try first???

    Read the article
