Search Results

Search found 24784 results on 992 pages for 'process integration packs'.


  • CentOS ISO DVD self-disc-check failing

    - by Jakobud
    I downloaded a CentOS 5.4 DVD ISO from one of the official mirrors. The MD5 sum is correct for the iso file that was downloaded. I burned the ISO onto a DVD+R using ImgBurn. I got no errors during the burning process and it says it finished burning successfully. When booting a server using this DVD to install CentOS on it, the first thing I did was run the self-disc-check (where it checks the files to make sure it's genuine, etc.). It failed, and didn't really say why. Am I doing something wrong here? Or is there someplace I can see why it's failing? If the MD5 is correct and I don't get any burning errors, how can it be failing its own self-check mechanism?
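
    A minimal sketch of how to check whether the burn itself is at fault, assuming the disc is readable as /dev/dvd and using a hypothetical ISO filename: checksum exactly as many 2048-byte sectors from the disc as the image contains, since the drive may return padding past the end of the ISO.

        # Hypothetical filename; substitute the ISO you downloaded.
        ISO=CentOS-5.4-i386-bin-DVD.iso
        # ISO 9660 images are a whole number of 2048-byte sectors.
        SECTORS=$(( $(stat -c %s "$ISO") / 2048 ))
        md5sum "$ISO"
        dd if=/dev/dvd bs=2048 count=$SECTORS | md5sum

    If the two sums differ, the burn (or the drive reading it back) is the problem rather than the download.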

    Read the article

  • 500 internal server error

    - by Rockr
    I am facing a 500.0 Internal Server Error quite frequently on my website. The error details are given below.

        HTTP Error 500.0 - Internal Server Error
        C:\PHP\php-cgi.exe - The FastCGI process exceeded configured activity timeout
        Module:        FastCgiModule
        Notification:  ExecuteRequestHandler
        Handler:       PHP_via_FastCGI
        Error Code:    0x80070102
        Requested URL: http://mydomain.com:80/index.php
        Physical Path: C:\HostingSpaces\coderefl\mydomain.com\wwwroot\index.php
        Logon Method:  Anonymous
        Logon User:    Anonymous

    When I contacted the support team, they said that my site is making heavy SQL queries. I am not sure how to debug this, but my site is very small and the database is optimized. I'm running WordPress as the platform. How do I resolve this issue?
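
    The message points at the FastCGI activityTimeout rather than at SQL directly. A sketch of one interim mitigation, assuming IIS 7 or later with the FastCGI module, is to raise the timeout for the php-cgi.exe application while the slow requests are investigated; the 600-second value is illustrative.

        rem Run from an elevated command prompt.
        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi ^
            "/[fullPath='C:\PHP\php-cgi.exe'].activityTimeout:600" /commit:apphost

    This only hides the symptom; profiling which requests exceed the timeout is still needed.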

    Read the article

  • Why do my websites have a first page rank on Bing and Yahoo but not Google? [closed]

    - by Linda Cullum
    I have 3 websites suffering from a drop in ranking with Google and hence a huge drop in traffic. The instant drop occurred in September and I have not been able to remedy it. For the past 6-10 years my main website http://LearnToSail.Net has ranked from #3 to #1 on the 1st page of Google and all the other engines with the search term "learn to sail". Now it shows on the 1st page of Bing and Yahoo but does not show up on ANY pages of Google. The only way it does come up is if I add "cd" to the "learn to sail" phrase. We sell a sailing cd on that website. The other websites are http://LearnToSailOnLine.com (search terms "learn to sail online" or "learntosailonline") and historyofthepilgrims.com (search terms "history of the pilgrims" or "historyofthepilgrims"). I get the same result: gone on Google but 1st pages for Bing and Yahoo. I have researched, edited, updated blogs, made sitemaps, prayed to the universe and used Google Webmaster Tools, but nothing is changing and I have lost a lot of business. I host with 1and1.com and have been back and forth with them, but to no avail and no change in traffic. I thought maybe some DNS mapping was off. I used to have a lot of traffic; now I have hardly any. Any advice would be greatly appreciated. I am still in the process of working on the issue of course! This is a really great website here and I am glad I came across it. Thank you, LS Cullum Little Pines Multimedia

    Read the article

  • SATA controller installed but not working? (No drives show up/Don't see card's BIOS)

    - by johnnycakes
    Hi, I have an old Promise FastTrak S150 TX4 SATA controller card. I put it in an old machine running Windows Server 2003. I booted the machine. The new hardware was detected. I installed the drivers. So now in Device Manager under "SCSI and RAID Controllers" I see "Win Server 2003 Promise FastTrak S150 TX4 Controller" and "Win Server 2003 Promise RAID Console SCSI Processor Device". I previously had the card in a machine that is now dead. When I booted that machine, during the boot process I would see the card info displayed and the drives that were attached. Boot would finish and my drives would be available. When I boot this new machine I never see that screen/text, and no hard drives are available/visible. What am I missing? Thanks.

    Read the article

  • When NOT to use a framework

    - by Chris
    Today, one can find a framework for just about any language, to suit just about any project. Most modern frameworks are fairly robust (generally speaking), with hour upon hour of testing, peer-reviewed code, and great extensibility. However, I think there is a downside to ANY framework in that programmers, as a community, may become so reliant upon their chosen frameworks that they no longer understand the underlying workings, or in the case of newer programmers, never learn the underlying workings to begin with. It is easy to become specialized to a degree that you are no longer a 'PHP programmer' (for example), but a "Drupal programmer", to the exclusion of anything else. Who cares, right? We have the framework! We don't need to know how to "do it by hand"! Right? The result of this loss of basic skills (sometimes to the extent that programmers who don't use frameworks are viewed as "outdated") is that it becomes common practice to use a framework where it is not required or appropriate. The features the framework facilitates wind up confused with what the base language is capable of. Developers start using frameworks to accomplish even the most basic of tasks, so that what once was considered a rudimentary process now involves large libraries with their own quirks, bugs, and dependencies. What was once accomplished in 20 lines is now accomplished by including a 20,000 line framework AND writing 20 lines to use the framework. Conversely, one does not want to reinvent the wheel. If I'm writing code to accomplish some basic, common little task, I might feel like I am wasting my time when I know that framework XYZ offers all the features I am after, and a whole lot more. The "whole lot more" part still has me worried, but it doesn't seem that many even consider it anymore. There has to be a good metric to determine when it is appropriate to use a framework. What do you consider the threshold to be? How do you decide when to use a framework, and when not to?

    Read the article

  • root locked out of EC2

    - by Paco
    I was in the process of disabling root logins on an AWS EC2 instance. Right after setting PermitRootLogin no and restarting sshd, I closed the terminal by accident, before setting up users with sudo privileges. The result is that my key to get into the instance as root does not work (sshd forbids it), and when I log into the instance as my regular user I can't gain root privileges (the root password was never set). The instance is running Ubuntu 8.10. Anyone have any idea how I can fix this?
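
    If the instance is EBS-backed, the standard escape hatch is to edit sshd_config from a second instance; this does not work for instance-store instances. A sketch with placeholder IDs and device names, written against today's AWS CLI (the older ec2-api-tools of that era have equivalent commands):

        # Stop the locked instance and move its root volume to a rescue instance.
        aws ec2 stop-instances --instance-ids i-0123456789abcdef0
        aws ec2 detach-volume --volume-id vol-0123456789abcdef0
        aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
            --instance-id i-rescue0123456789 --device /dev/sdf
        # On the rescue instance, mount the volume and undo the change.
        sudo mount /dev/xvdf1 /mnt
        sudo sed -i 's/^PermitRootLogin no/PermitRootLogin yes/' /mnt/etc/ssh/sshd_config
        sudo umount /mnt
        # Detach, reattach to the original instance as its root device, and start it.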

    Read the article

  • Duplicated menu, panel indicators and taskbar

    - by Mykro
    Ubuntu 12.04. The first time I log into Gnome Classic I get a duplicate of every menu, panel indicator and taskbar entry. If I log out and back in again I now have three copies. Log out and log in, four copies and so forth. What would cause this? I can't see any obvious duplicates in the process list:

        $ ps -A | grep 'gno\|org\|nau'
        27439 tty7     00:00:18 Xorg
        27610 ?        00:00:00 gnome-keyring-d
        27621 ?        00:00:00 gnome-session
        27674 ?        00:00:00 gnome-settings-
        27709 ?        00:00:07 gnome-panel
        27720 ?        00:00:00 gnome-fallback-
        27726 ?        00:00:04 nautilus
        27736 ?        00:00:00 polkit-gnome-au
        28281 ?        00:00:00 gnome-screensav
        29016 ?        00:00:00 gnome-terminal
        29021 ?        00:00:00 gnome-pty-helpe

    If it helps, I was recently trying the Nouveau drivers but have now reverted to NVIDIA. Configuration is Separate X Window, Xinerama enabled. Unity is fine, so the problem is limited to the Classic desktop. I haven't had any luck googling these particular symptoms so any tips would be really appreciated. Thanks!

    Read the article

  • Is there a way to set up message moderation in Exchange 2007?

    - by Nate Pinchot
    Is there a way to get a feature in Exchange 2007 similar to the message moderation in Exchange 2010, through the use of third-party tools or otherwise? I've Googled things like "exchange 2007 outbound email approval" to no avail. We are working on getting Exchange 2010 implemented, but I need an interim solution if at all possible. The reason for this is a customer-service concern. I am willing to use a small process to act as a smart host if needed. I would appreciate any suggestions or advice. Edit: My apologies, I should have been more clear that I am trying to moderate/approve outgoing email from certain users, not moderate/approve email sent to a distribution group.

    Read the article

  • CPU usage nearly 100% constantly on Windows XP SP3

    - by user33882
    When I run some heavy applications like games or VirtualBox for a while, CPU usage is normal for about 15 minutes and then suddenly increases. Even after I quit the heavy apps, when I start some other application its CPU usage is also very high. This continues until I reboot the system. There is no single particular process occupying more CPU; every process's CPU usage is a little higher than normal. Any solutions?

    Read the article

  • Putting our OLTP and OLAP services on the same cluster

    - by Dynamo
    We're currently in a bit of a debate about what to do with our scattered SQL environment. We are setting up a cluster for our data warehouses for sure and are now in the process of deciding if our OLTP databases should go on the same one. The cluster will be active/active with database services running on one node and reporting and analytical services on the other node. From a technical standpoint I don't see an issue here. With the services being run on different nodes they shouldn't compete too heavily for resources. The only physical resource that may be an issue would be the shared disk space. Our environment is also quite small. Our biggest OLAP database at the moment is only about 40GB and our OLTP are all under 10GB. I see a potential political issue here as different groups are involved but I'm just strictly wondering if there would be any major technical issues that could arise from this setup.

    Read the article

  • Why does my iTunes use so much CPU time?

    - by bikesandcode
    I have a roughly two-year-old MacBook (OS X 10.5) running iTunes 10. When iTunes is playing MP3s, I see CPU usage for the iTunes process in the system monitor ranging from 65%-75%. When I pause the music, I still see CPU usage of about 65%-75%. I do not have any visualisations going, to my knowledge I have not turned on any CPU-destroying features, and my music library isn't tiny, but it's hardly huge (3GB). This is mildly annoying when I'm plugged into the wall, as I only get slightly longer compile times, but if I am out and about it is a major drain on the battery. Using VLC I see CPU loads of at most about 10% when listening to music, and generally lower. What the heck is iTunes doing?

    Read the article

  • How to make an iOS plugin for Unity3D

    - by DannoEterno
    I've spent the last 2 days reading articles and books to understand how to make an iOS plugin for Unity. Basically I just need a demo to understand how it works. So far I've tried this process (with really poor luck): I started a new project in Unity and wrote a simple script:

        using UnityEngine;
        using System.Collections;
        using System;
        using System.Runtime.InteropServices;

        public class CallPlugin : MonoBehaviour
        {
            [DllImport ("__Internal")]
            private static extern int test();

            void Start ()
            {
                Debug.Log(test());
            }
        }

    Then I created a project in Xcode with this simple script:

        extern "C" {
            int test()
            {
                int che = 5;
                return che;
            }
        }

    Then I tried: to put the .mm and .h in Assets/Plugins/iOS = nothing; to build the Unity project and then add the .h and .mm to the Xcode project = nothing. In Unity I always get an EntryPointNotFoundException, so Unity sees the file but is unable to reach the method. The problem is... how?! :) Maybe I missed something or have done something wrong? Thanks a lot for every help that you can give me :)

    Read the article

  • Backup / Disaster Recovery, should I store RAR-compressed files?

    - by moraleida
    I'm in the process of recovering files from an accidentally formatted Ext4 partition using PhotoRec. It had about 300GB of data, of which I've already recovered about 30GB. So far, it seems to me that the recovery of RAR-compressed files has been much more successful than the recovery of individual uncompressed files and ZIP-compressed files, in the sense that a lot of the recovered files/zips were unreadable, while pretty much all of the RAR files were intact. Is there such a relation? Are RAR-compressed files really less prone to corruption and thus easier to recover?
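
    One RAR feature worth knowing here: archives can carry an optional recovery record, which lets the rar tool itself detect and repair moderate damage. Whether that explains the difference above is uncertain (how contiguously each file sat on disk also matters a lot to PhotoRec), but for future backups it is cheap insurance. A sketch with an illustrative path:

        # Create an archive with a 5% recovery record.
        rar a -rr5% backup.rar /path/to/files
        # Test integrity, and repair from the recovery record if needed.
        rar t backup.rar
        rar r backup.rar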

    Read the article

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call in the case of problems with any of our BizTalk solutions.  When you move to BizTalk Server 2009 it is quickly apparent that HAT is no longer with us. HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, what messages had been suspended, what orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on. With BizTalk Server 2009 much of the functionality of HAT can now be found in the BizTalk Administration console.  Select a BizTalk Group and you will be shown the Group Hub Overview page.  This provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries.  These can then be saved and loaded in other Group Hub instances - useful for creating queries in development for later use in Test, Pseudo-Live and Live environments. In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:

        Messages - last 100 received
        Messages - last 100 sent
        Messages - last 50 suspended
        Service instances - last 100

    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go.  Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

    Read the article

  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! Well, during this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to pick one option from the whole. This has been my decision:

    1) I don't have an entity class; it is just an id.
    2) I have systems that contain a list of components (the list is homogeneous; I mean, RenderSystem will just have RenderComponents).
    3) Components will be just data.
    4) There will be some kind of "entity prototypes" in a manager or something, from which we will create entity instances. Ideally they will define the type of components each entity has and its initialization data.
    5) Prototype code to create an entity (this is from the top of my head): int id=World::getInstance()->createEntity("entity template");
    6) This will notify all systems that a new entity has been created, and if the entity needs a component that the system handles, the system will add it to the entity.

    Ok, those are the ideas. Let's see if someone can help with the problems:

    1) The main problem is the templates that are sent to the systems in the creation process to populate the entity with the needed components. What would you use: an OR(ed) int? A list of strings?
    2) How to do initialization for components when the entity has been created? How to store this in the template? I have thought about having a virtual function in the template that, after the entity is created and populated, gets the components and sets initialization values.
    3) Don't you think this is a lot of work for just entity creation?

    Sorry for the long post; I have tried to expose my ideas and findings so others could have a start, besides exposing my problems. Thanks in advance, Notbad.

    Read the article

  • New Success Story: McGrath RentCorp Improves Business Reporting and Analytics Capabilities with Cloud-based Business Intelligence Solution

    - by LanaProut
    McGrath RentCorp worked with Jade Global, an Oracle Platinum Partner, to scope, design, and execute the deployment, using its Oracle Accelerate solution to jumpstart the process and accelerate the time to value. Click here to view the full story.

    Read the article

  • One vs. many domain user accounts in a server farm

    - by mjustin
    We are in the process of migrating a group of related computers (intranet servers, SQL, application servers of one application) to a new domain. In the past we used a dedicated domain user account for every computer (web1, web2, appserver1, appserver2, sql1, sqlbackup ...) to access central Windows resources like network shares. Every computer also has a local user account with the same name. I am not sure if this is necessary, or if it would be easier to configure and maintain to use one shared domain user account. Are there key advantages / disadvantages of having one single user account vs. dedicated accounts per computer for this group of background servers? If I am not wrong, one advantage besides easier administration of the user account could be that moving installed applications and services around between the computers would no longer require a check of the access rights. (Except where IP addresses or ports are used.)

    Read the article

  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was just fine; I had methods with easy-to-understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes with different paths depending on the result value, etc. Suddenly, everything is very messy: stopping the execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain. Development becomes less quick, less agile, and catching those bugs that happen once in a 1000 times becomes hell. From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers could have delays, and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed... I don't know the real value of the improvement. After reading about the TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very concrete scenarios, but a bad experience using it broadly. How do you work asynchronously? Do you use it always? Or just for long-running processes? Thanks.

    Read the article

  • Force Run System File Checker (SFC.exe)

    - by Zuck
        C:\Windows\system32>sfc /scannow

        Beginning system scan.  This process will take some time.

        There is a system repair pending which requires reboot to complete.
        Restart Windows and run sfc again.

    Is there a way I can force SFC to scan again and make the new fixes on reboot? Perhaps by deleting some file or something? I removed C:\Windows\WinSxS\pending.xml by taking ownership, but it still shows the same message.
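
    The supported way to clear a stuck pending operation (rather than deleting pending.xml by hand) is DISM's RevertPendingActions. A sketch, with two assumptions: the machine runs Windows 7/Server 2008 R2 or later, and the command is run from the Windows Recovery Environment, since this option is not supported on a running OS. The drive letter may differ inside WinRE.

        rem Boot from install media > Repair your computer > Command Prompt.
        dism /image:C:\ /cleanup-image /revertpendingactions
        rem Reboot normally, then retry:
        sfc /scannow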

    Read the article

  • Optimizing mod_fcgid for a dedicated site

    - by Mike Williams
    I'm using mod_fcgid and I'm trying to find resources on how I can optimize it for running a dedicated website, but have had no luck so far. I've got Apache 2 running and I'm trying to have PHP processes spawned and always running, so Apache does not have to keep spawning them.

        # Fastcgi configuration for PHP5
        LoadModule fcgid_module modules/mod_fcgid.so
        MaxRequestsPerProcess 5000
        # Maximum number of PHP processes.
        MaxProcessCount 8
        # Number of seconds of idle time before a process is terminated
        IPCCommTimeout 1800
        IdleTimeout 1800
        AddHandler fcgid-script .php5 .php4 .php .php3 .php2 .phtml
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php5
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php4
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php3
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php2
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .phtml
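
    One sketch of a tweak that pairs with the configuration above, on the assumption that the cPanel php5 wrapper launches php-cgi: PHP enforces its own request limit through the PHP_FCGI_MAX_REQUESTS environment variable (default 500), and if that is lower than MaxRequestsPerProcess, php-cgi will exit long before mod_fcgid expects it to, defeating the goal of long-lived processes.

        # Keep PHP's internal limit in step with MaxRequestsPerProcess.
        DefaultInitEnv PHP_FCGI_MAX_REQUESTS 5000
        # Let mod_fcgid manage process counts itself rather than PHP.
        DefaultInitEnv PHP_FCGI_CHILDREN 0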

    Read the article

  • Is there any way to lock a few Windows Registry entries?

    - by Moorage
    I have seen that most viruses, spyware, etc. change a few registry entries that are linked to the boot process or that start when Windows loads user settings. Is there any way to lock the entries that are linked to starting the system, like explorer.exe and userinit.exe, so that a virus at least cannot stop the system from starting up? Why didn't Microsoft put those registry entries somewhere separate so that nothing can touch them? Now my userinit.exe entry is affected and it's not letting me log on to the computer. I get a blank desktop, but the system loads in safe mode. I have run an antivirus bootable CD but still have not found a solution.
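
    There is no way to make a key fully untouchable (anything running as SYSTEM can take ownership back), but you can tighten the ACL on the Winlogon key so that ordinary malware at least fails to write to it, and you can inspect the value that is breaking your logon. A sketch; the value shown is the usual default, so verify it for your Windows version before writing it back:

        rem Inspect the Userinit value that controls logon.
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Userinit
        rem Restore the usual default (note the trailing comma), e.g. from a
        rem bootable recovery environment or another account:
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Userinit /t REG_SZ /d "C:\Windows\system32\userinit.exe," /f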

    Read the article

  • Database commands

    - by user12609425
    Ops Center has two database options - you can have Ops Center automatically install a database on the Enterprise Controller system, or you can use your own database on any system you choose. If you use your own database, it's obviously important to make sure that this database is running smoothly. You have a few tools that can help you do this. The first is the ecadm command. This command has a variety of subcommands that let you view and control the status of the Enterprise Controller. Two subcommands in particular are relevant to the database:

    ecadm verify-db: This subcommand verifies that the database is reachable and that the schemas are configured with the proper permissions. Use the -v option if you want more details; the command is normally terse if the DB is configured correctly.

    ecadm sqlplus -r: This subcommand opens an sqlplus console connection to the database. The -r option makes this console read-only, which isn't necessary, but is generally a good idea.

    You can also view the database contents using Oracle SQL Developer or other tools. The Accessing Core Product Data how-to describes this process.
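
    A quick console sketch of the two subcommands in use; the install path is an assumption, since Ops Center's bin directory varies by platform (e.g. /opt/SUNWxvmoc/bin on Oracle Solaris):

        # Verbose database check; the output is terse when everything is healthy.
        /opt/SUNWxvmoc/bin/ecadm verify-db -v
        # Read-only sqlplus session against the product database.
        /opt/SUNWxvmoc/bin/ecadm sqlplus -r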

    Read the article

  • Storage of leftover values in a situation of having to round down

    - by jt0dd
    I'm writing an app (client and server side) where the number of sales required by each employee must be kept track of in round-number form. Each month, the employees are required to sell a certain number, and this app needs to keep track of how many sales must be made for each 12 hour interval during the work week. Because I have to round the values down to a whole number, I must keep track of leftovers in the rounding process and ensure that they are always carried over. My method must ensure the storage of the leftover value even when client and server side crash, restart, close, etc. Right now, I'm working on doing this by storing the leftovers in a field in the user's account row in the database each time a value is rounded, reading the stored value, removing any portion that is used (when a whole number is reached, most of the leftover is used up), and storing the new value. This practice seems weird because while the leftovers are calculated on the client side, it's the same number for each employee, and every employee using the app is storing a copy of the same leftover data. Alternatively, I could have all clients store the data at once into the same data field on a general table, but this is just as weird. Is there a better way that this can be handled or is my method correct?
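
    The carrying scheme described above can be illustrated in a few lines; the numbers are made up, and the leftover is tracked as an integer numerator over the number of intervals, which avoids floating-point drift and is the single value the app would need to persist between runs:

        # Distribute a quota of 253 sales across 10 intervals, rounding down
        # and carrying the leftover units forward.
        awk 'BEGIN {
          quota = 253; intervals = 10; rem = 0
          for (i = 1; i <= intervals; i++) {
            rem += quota % intervals            # accumulate leftover units
            assigned = int(quota / intervals)   # round down
            if (rem >= intervals) { assigned++; rem -= intervals }
            total += assigned
            printf "interval %d: %d (leftover %d/%d)\n", i, assigned, rem, intervals
          }
          printf "total assigned: %d of %d\n", total, quota
        }'

    The assigned whole numbers always sum back to the quota, with nothing lost to rounding.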

    Read the article

  • Cheap server stress testing

    - by acrosman
    The IT department of the nonprofit organization I work for recently got a new virtual server running CentOS (with Apache and PHP 5), which is supposed to host our website. During the process of setting up the server I discovered that the slightest use of the new machine caused major performance problems (I couldn't extract tarballs without bringing it to a halt). After several weeks of casting about in the dark by tech support, it now appears to be working fine, but I'm still nervous about moving the main site there. I have no budget to work with (so no software or services that require money), although due to recent cutbacks I have several older desktops that I could use if it helps. The site doesn't need to withstand massive amounts of traffic (it's a Drupal site with just a few thousand visitors a day), but I would like to put it through a bit of its paces before moving the main site over. What are cheap tools that I can use to get a sense of whether the server can withstand even low levels of traffic? I'm not looking to test the site itself yet, just the fundamental operation of the server.
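
    Two free options that fit the zero-budget constraint are ApacheBench (it comes with Apache) and siege; both can be run from one of those spare desktops. A sketch with illustrative numbers and a placeholder hostname:

        # 1000 requests, 10 concurrent, against a test page on the new server.
        ab -n 1000 -c 10 http://newserver.example.org/
        # siege: 10 concurrent users for 2 minutes, replaying a list of URLs.
        siege -c 10 -t 2M -f urls.txt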

    Read the article

  • GLES2.0 3D Android game performance and multithreading the update?

    - by Ofer
    I have profiled my mixed Java/C++ Android game and I got the following result: https://dl.dropbox.com/u/8025882/PompiDev/AndroidProfile.png As you can see, the pink area is a C++ function that updates the game. It does things like updating the logic, but mostly it generates a "request list" for rendering. The thing is, I generate DrawLists in C++ and then send them to Java to process and draw using GLES 2.0. Since then I have been able to improve update from 9ms down to about 7ms, but I would like to ask whether I would benefit from multithreading the update. As I understand the diagram, the function that takes the most time is the one whose color you see on the timeline. So the pink area is taken mostly by update. The other area has MainOpenGL.Handle as its main contributor (which is my Java function), but since it's not drawn at the top of the diagram, can I conclude that other things are happening at the same time that use the CPU? Or even GPU work that isn't shown in this diagram? I am not sure how the GPU works here. Does it calculate stuff in parallel to the CPU, or is its usage counted as part of the CPU, as on an SoC? Anyway, in case GPU work does happen in parallel to the CPU, I would guess that if I run this C++ update in parallel to the thread that makes the OpenGL calls, I might make use of "dead CPU time" due to GPU stalling, or maybe have the GPU calls processed earlier because they won't have to wait for update to finish. How do you suggest I improve performance based on that? Thanks.

    Read the article
