Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • Trouble installing php memcache extension

    - by user35346
    I'm trying to install memcache on MAMP but I get the warning below, and when I continue it seems to complete properly. I add the line extension=memcache.so to the php.ini and restart MAMP but phpinfo() doesn't list the memcache extension.
        $ ./pecl install memcache
        downloading memcache-2.2.5.tgz ...
        Starting to download memcache-2.2.5.tgz (35,981 bytes)
        ..........done: 35,981 bytes
        11 source files, building
        WARNING: php_bin /Applications/MAMP/bin/php5/bin/php appears to have a suffix 5/bin/php, but config variable php_suffix does not match
        running: phpize
        Configuring for:
        PHP Api Version:        20041225
        Zend Module Api No:     20060613
        Zend Extension Api No:  220060519
        Enable memcache session handler support? [yes] : yes
        ...
        Build process completed successfully
        Installing '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/memcache.so'
        install ok: channel://pecl.php.net/memcache-2.2.5
        configuration option "php_ini" is not set to php.ini location
        You should add "extension=memcache.so" to php.ini
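    The last two lines of the pecl output hint at the likely cause: pecl does not know which php.ini MAMP actually loads, so the line may be going into the wrong file. A hedged check (the conf path below is an assumption about MAMP's layout; verify it first):
        # confirm which php.ini MAMP's PHP really reads
        /Applications/MAMP/bin/php5/bin/php -i | grep "Loaded Configuration File"
        # point pecl at that exact file
        ./pecl config-set php_ini /Applications/MAMP/conf/php5/php.ini
        # verify the module now loads
        /Applications/MAMP/bin/php5/bin/php -m | grep memcache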

    Read the article

  • WCF Service in Windows Services

    - by sivakumar
    I created a WCF service library and tested it successfully with the default WCF Test Client. When I host the WCF service in a Windows service, I get an error. I am using Windows XP SP3, .NET 3.5 and Visual Studio 2008. The error is: Error opening host: HTTP could not register URL "http://+:8731/WCFServerDLL/Service1/". Your process does not have access rights to this namespace (see "http://go.microsoft.com/fwlink/?LinkId=70353" for details). Following that Microsoft link, I used httpcfg. I ran "httpcfg.exe set urlacl /u http://localhost:8731/WCFServerDLL/Service1/ /a" and the result was "HttpSetServiceConfiguration completed with 0", but I still get the same error. What is the problem? Can you give me a suggestion?
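    Note the mismatch between the URL in the error message (the wildcard form "http://+:8731/...") and the one registered with httpcfg ("http://localhost:8731/..."). A hedged sketch of registering the exact wildcard form instead (the SDDL string grants execute to Everyone and is an assumption; tighten it to the service account):
        httpcfg.exe set urlacl /u http://+:8731/WCFServerDLL/Service1/ /a D:(A;;GX;;;WD)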

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Update: Commercially Supported GlassFish Versions. Aquarium blogger David Delabassee shares background information and links to where you can download the recently released GlassFish Server Bundle Patch 3.1.2.8. Read the article.
    Announcing WebLogic on Oracle Database Appliance 2.7. Oracle WebLogic Server on Oracle Database Appliance 2.7 offers a complete solution for building and deploying enterprise Java EE applications in a fully integrated system of software, servers, storage, and networking that delivers highly available database and WebLogic services. Learn more.
    APAC Partner iDay: What's New in Oracle WebLogic, 8-Apr, 12 noon SG / 2pm AEDT / 9:30 IST. Invite your partners. Register.
    Virtual Developer Conference: Creating a Foundation for Cloud Applications using Oracle WebLogic and Oracle Coherence. On demand.
    Webcast: WebLogic Configuration using Chef and Puppet. On demand.
    Podcast Series: Part 3, Oracle WebLogic Server and Oracle Database Integration. Podcast.
    Coherence*Web: Sharing an httpSession Among Applications in Different Oracle WebLogic Clusters. SOA solution architect Jordi Villena shows how easy it is to extend Coherence*Web to enable session sharing. Read the article.
    Multi-Factor Authentication in Oracle WebLogic. Using multi-factor authentication to protect web applications deployed on Oracle WebLogic. Read the article.
    Video: Coherence Community on Java.net, 4 projects available under CDDL-1.0. Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how you can actively participate in open source Coherence Community projects. Watch the video.
    Working with Oracle Security Token Service in an Architecture Involving Oracle WebLogic Server and Oracle Service Bus. Oracle Fusion Middleware specialist Ronaldo Fernandes takes you step by step through the process of creating a single sign-on between Oracle WebLogic and Oracle Service Bus using Oracle Security Token Service (OSTS) to generate SAML tokens. Read the article.
    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community. Please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Boot failure after update and system crash 11.10

    - by Alubuntu
    I'm using: Dell XPS M1330 laptop, Ubuntu 11.10 32-bit, single boot (Linux is the only OS), VirtualBox (with Windows XP on a virtual machine). The system crashed while working with a very heavy image in GIMP (and VirtualBox was running). During the same session, I made an automatic system update, but I don't know what exactly was updated. After the crash the system doesn't boot. It always freezes on the terminal screen, but at different stages: "Starting CUPS printing spooler/server", "Checking battery state", "mountall: plymouth command failed", etc. Sometimes it indicates [failed] for some of the processes and sometimes they're all [ok]. I did a boot info summary with boot-repair, which gave me this report: http://paste.ubuntu.com/1050743/ This is not a new install; I have been using it for almost 6 months. Since last month, though, it couldn't halt: it would start shutting down and stop at a black screen with the fan on and the power light still on (I used to finish the shutdown process by forcing a halt with a long press of the power button). I don't know if this has anything to do with the boot failure. Is there any way I can solve this issue (the boot failure) or run some sort of system check to find out the cause of the problem? I wasn't sure if this was the right place to ask the question, so I also asked it at https://answers.launchpad.net/ubuntu/+question/201004
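    Given that the crash happened mid-update and the boot stalls mention mountall, a filesystem check is a reasonable first test. A hedged sketch from a live USB session (the partition name is an assumption; confirm it first):
        sudo fdisk -l              # identify the root partition
        sudo fsck -f /dev/sda1     # force a full check while it is unmounted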

    Read the article

  • deploying a Python application from a PHP developer

    - by user1218776
    I'm a little confused about the deployment process for Python. Let's say you create a brand new project with virtualenv, source bin/activate, pip install a few libraries, write a simple hello world app, and pip freeze the dependencies. When I deploy this code to a machine, do I first need to set up and activate a virtualenv there before installing the dependencies? I don't mean to sound like a total noob, but in the PHP world I don't have to worry about this because it's already part of the project: all the dependencies are registered with the autoloader in place. The steps would be: rsync the files (or any other method); source bin/activate; pip install the dependencies from the pip freeze output file. It feels awkward, or just wrong and very error prone. What are the correct steps to take? I've searched around, but it seems many tutorials/articles assume that anyone reading them has prior Python experience (imo).
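    For reference, a hedged sketch of the conventional flow (host and paths are placeholders): freeze on the development box, then create a fresh virtualenv on the target and install from the frozen list.
        # on the development machine
        pip freeze > requirements.txt
        rsync -av ./ user@host:/srv/app/
        # on the target machine
        cd /srv/app
        virtualenv venv
        . venv/bin/activate
        pip install -r requirements.txt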

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:
        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips
    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.
        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME
        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()
        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]
    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and then NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set reveals some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read:
        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
    which gives us
        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
            AND [ddips].[object_id] = [i].[object_id]
    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips
    Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:
        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO
    In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), any inconsistency with the expected results will simply be noticed on inspection; in the case of this blog, this is the method I have used. So, now that we can relate the values in these columns to something we recognise in the database, let's see what those other values in the dmv are all about. Of the next columns, we'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are: avg_fragmentation_in_percent, the amount that the index is logically fragmented (it will show NULL when the dmv is queried in SAMPLED mode); fragment_count, the number of pieces that the index is broken into (it will show NULL when the dmv is queried in SAMPLED mode); avg_fragment_size_in_pages, the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit (it will show NULL when the dmv is queried in SAMPLED mode); and page_count, the total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB, then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks must do to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1 * 8) * 1024 * (Col2 / 100)) / Col3 = Col4*. avg_page_space_used_in_percent is an important column to review, as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important, as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: details of the smallest and largest records in the index.
    Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, which should be analysed so that new records do not need to be inserted in the middle of the index but can always be added to the end. * It's approximate because there are many factors, such as the type of data and other database settings, that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford, a free ebook or paperback from Simple Talk. Disclaimer: Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
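    Pulling the threads above together, a hedged example (a sketch, not from the original article) of the kind of check a DBA might schedule: list the indexes whose logical fragmentation crosses a threshold and that are big enough to matter.
        -- indexes over 30% fragmented with more than 100 pages (thresholds are illustrative)
        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , [ddips].[avg_fragmentation_in_percent]
             , [ddips].[page_count]
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
            AND [ddips].[object_id] = [i].[object_id]
        WHERE [ddips].[avg_fragmentation_in_percent] > 30
          AND [ddips].[page_count] > 100
        ORDER BY [ddips].[avg_fragmentation_in_percent] DESC;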

    Read the article

  • How to Create a Minimize All Windows (Win + M) Hotkey for Mac OS X

    - by The Geek
    Windows users have been able to minimize every window on their desktop ever since keyboards with the Win key started showing up: just tap Win+M on your keyboard, and every window is minimized. For Mac OS X, it's not quite as simple. You can, of course, use the Cmd+Opt+H+M shortcut key combination to hide most windows... but that's a lot of keys to hit at once, and it doesn't always minimize everything in my experience. So like everything else I wanted from Windows, it was time to figure out how to get it on OS X as well. This method uses QuickSilver to provide the shortcut key trigger; if there's a better way to do that, please let us know. Luckily OS X includes a nice scripting platform, and we can use the following script from a helpful person over at SuperUser to make this all happen.
        tell application "Finder" to activate
        tell application "System Events"
            tell application process "Finder"
                tell menu bar 1
                    click menu item "Hide Others" of menu of menu bar item "Finder"
                    click menu item "Minimize All" of menu of menu bar item "Window"
                end tell
            end tell
        end tell
    Open up a new AppleScript Editor window and paste in the script from above. Then go to File and Save.

    Read the article

  • How to relink user folders in Windows 7

    - by Jonathan
    The short story: Win7 lost track of my user folders' location (Desktop, My Documents, My Pictures, etc.). They now reside on a different partition. How can I relink these folders? The long story: The way I partition my drives is: C:, an SSD drive for Windows and program files; D:, a large regular hard drive for all my user data. The first thing I do after a fresh Win7 install is move my user folders to D:, by right-clicking these folders under C:\users\username\, choosing the Location tab and clicking Move. I've just completed encryption of D: using TrueCrypt. It shows a lot of warnings before the encryption process, but (hrrmm...) it does not mention the fact that after encryption the data is located on a new drive letter, say E:. This broke Win7's links to my special user folders. How can I relink these folders?
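    The supported route is the same Location tab, now pointed at the new E: paths. For scripting it, a hedged sketch of the registry equivalent (the value name Personal maps to My Documents; the E: path is an assumption based on the TrueCrypt mount letter; log off and back on afterwards so Explorer picks it up):
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Personal /t REG_EXPAND_SZ /d "E:\Documents" /f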

    Read the article

  • Firefox 3.6.3 on Snow Leopard 10.6.3 - symbolic link to command line binary doesn't work?

    - by David Watson
    I have Firefox 3.6.3 installed on Mac OS X 10.6.3 Snow Leopard from the DMG. I can run Firefox from the terminal using /Applications/Firefox.app/Contents/MacOS/firefox-bin. However, if I create a symbolic link: sudo ln -s /Applications/Firefox.app/Contents/MacOS/firefox-bin /bin/firefox then it refuses to run, or at least display. When I issue "firefox" from the terminal, I can see the process in top, but the GUI never appears. :/
        $ ls -lr /bin/firefox
        lrwxr-xr-x 1 root wheel 52 May 5 15:19 /bin/firefox -> /Applications/Firefox.app/Contents/MacOS/firefox-bin
    Any ideas? Thanks, David
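    One hedged workaround is a wrapper script instead of a symlink: firefox-bin may resolve its application bundle relative to the path it was invoked with, and an exec from a wrapper preserves the real path where a symlink does not.
        # create /bin/firefox as a two-line wrapper (paths as in the question)
        printf '#!/bin/sh\nexec /Applications/Firefox.app/Contents/MacOS/firefox-bin "$@"\n' | sudo tee /bin/firefox
        sudo chmod +x /bin/firefox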

    Read the article

  • Linux: find thin server running on port 80 and kill it

    - by Andrew
    On my Linux server I ran: sudo thin start -p 80 -d Now I'd like to restart the server. The trouble is, I can't seem to find the old process to kill. I tried: netstat -anp But what I see on port 80 is this:
        Proto Recv-Q Send-Q Local Address   Foreign Address   State    PID/Program name
        tcp   0      0      0.0.0.0:80      0.0.0.0:*         LISTEN   -
    So it didn't give me a PID to kill... I tried pgrep -l thin but that gave me nothing. Meanwhile pgrep -l ruby gives me about 6 processes running. I don't really understand why multiple ruby processes would be running, or which one I need to kill... How do I kill / restart the thin daemon?
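    The "-" in netstat's PID column usually just means the command was not run as root, so it could not read another user's process info. A hedged sketch:
        sudo netstat -anp | grep :80    # now shows PID/program for the listener
        sudo lsof -i :80                # alternative view of who holds port 80
        sudo fuser -k 80/tcp            # or kill the holder outright
        # thin also writes a pid file when daemonized (tmp/pids/thin.pid by default),
        # so from the directory where it was started this may work too:
        sudo thin stop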

    Read the article

  • xna download website source code

    - by Emre Canbazoglu
    I have to download the HTML of a web site during the game. I take the poster URL of a movie (and other information) from the IMDb web site by scraping the HTML. I have to do the download many times during the game, for different movies. I can download and scrape the HTML, but downloading takes too much time and causes the game to slow down (freeze while downloading). How can I solve this problem? One approach is to download and scrape all the information before the game, store it in a database, and access it from the database during the game. I think this would work properly, but it is not what I exactly want; it would be better if it were dynamic. I also thought of using multi-threading, but I am a bit confused about how to implement threading in XNA. I read some articles about it but it is not so clear. I mean, when should I start the thread, and what about the Update function, etc.? I need your help, guys.
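    A hedged sketch of the threading idea in C# (assuming a Windows target where WebClient is available, the usual usings for System.Collections.Generic, System.Net, System.Threading and Microsoft.Xna.Framework, and placeholder names like pendingHtml and ParseMovieHtml): the download runs on a thread-pool worker while Update/Draw keep ticking, and Update drains completed results on the game thread.
        // fields on the Game class (hypothetical names)
        private readonly Queue<string> pendingHtml = new Queue<string>();

        private void StartDownload(string url)
        {
            ThreadPool.QueueUserWorkItem(state =>
            {
                using (var client = new WebClient())
                {
                    string html = client.DownloadString(url);  // blocks only this worker thread
                    lock (pendingHtml) { pendingHtml.Enqueue(html); }
                }
            });
        }

        protected override void Update(GameTime gameTime)
        {
            lock (pendingHtml)
            {
                while (pendingHtml.Count > 0)
                    ParseMovieHtml(pendingHtml.Dequeue());     // your existing scraping code
            }
            base.Update(gameTime);
        }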

    Read the article

  • Under kvm, Vista guest OS install halts on black screen

    - by Isaac Sutherland
    I am using kvm on my ubuntu-server-10.04 amd64 dual core PC. I am trying to install a Windows Vista guest OS. The installation proceeds properly until the system reboots halfway through the installation process, at which point it stops on a black screen and CPU usage goes to near zero. I created the vm with virt-install as follows:
        virt-install -n vista --connect qemu:///system -r 1024 --vcpus 2 \
            --os-type windows --os-variant vista \
            --virt-type kvm --accelerate \
            -c /dev/sr0 \
            --disk path=/dev/main/vista-hd \
            --network bridge=br0 \
            --vnc --noautoconsole
    Where /dev/sr0 is the physical drive with the Vista installation DVD, and /dev/main/vista-hd is a 20-GB LVM logical volume I created. A number of people seem to have had success installing Vista under KVM, but I haven't been able to determine what is causing my problem. Ideas, anyone?

    Read the article

  • Representation of data in application versus database

    - by user1815201
    I'm going to make an application that will be given data to put in a database. The data will for the most part be the same, but the way it is formatted will vary a lot (it could be anything from text files to .xls to .doc). I'm not a very experienced developer, but I can see some potential issues and I want to minimize them. First off, I have decided to use the DAO pattern, so that I can easily support new file formats or files suddenly formatted in different ways. What I really wonder about, though, is how I should manage the data itself within my application. I'm thinking that the database DAO should have models representing each table of the database, with the same relations between them, to make the uploading process easy. But should the filesystem DAOs have to use the same models? I can imagine that when the database changes, the change will suddenly propagate throughout the entire system, DAOs and models alike, and that is obviously a bad thing. I'm a little bit tired and out of time; will update with whatever questions you have. Thanks!
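    A hedged sketch of that separation in Python (all names here are hypothetical): file readers each produce the same canonical model, and only the database writer knows the table layout, so a schema change is contained in one place.
        import csv
        from dataclasses import dataclass
        from typing import Iterable, List

        @dataclass
        class Record:                  # the single canonical model all DAOs share
            name: str
            value: float

        class CsvReader:               # add XlsReader, DocReader, ... behind the same method
            def read(self, path: str) -> List[Record]:
                with open(path, newline="") as f:
                    return [Record(row["name"], float(row["value"]))
                            for row in csv.DictReader(f)]

        class DbWriter:                # the only class that knows the database schema
            def write(self, records: Iterable[Record]) -> None:
                for r in records:      # stand-in for real INSERTs
                    print(f"INSERT INTO records (name, value) VALUES ({r.name!r}, {r.value})")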

    Read the article

  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to remount a formerly read-only mounted filesystem read-write:
        mount -o remount,rw /mountpoint
    Unfortunately it did not work:
        mount: /mountpoint not mounted already, or bad option
    dmesg reports:
        [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list. Please umount/remount instead
    A umount does not work either:
        umount /mountpoint
        umount: /mountpoint: device is busy.
        (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))
    Unfortunately, neither lsof nor fuser shows any process accessing anything located under the mount point. So, how can I clean up this unprocessed orphan inode list to be able to mount the filesystem again without rebooting the computer?
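    A hedged sketch of the usual escape route (dm-0 is taken from the dmesg line above; adjust to your device): find whatever still pins the mount, fall back to a lazy unmount, then let e2fsck process the orphan list on the unmounted device.
        sudo lsof +f -- /mountpoint     # match by filesystem; catches more than a plain path
        sudo fuser -vm /mountpoint      # same idea, different tool
        sudo umount -l /mountpoint      # lazy unmount as a last resort
        sudo fsck.ext4 -f /dev/dm-0     # replays the journal and clears the orphan inode list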

    Read the article

  • How to zero out a drive?

    - by Mohd Arafat Hossain
    I'm currently running Ubuntu 12.04 and want to completely format my laptop so I can install Windows 7 (replacing Ubuntu). When I put in my bootable USB with the OS, it just shows a black screen with a white blinking cursor in the top left corner. I wait for hours (to be specific, 5 whole hours) for something to happen but get nothing, so I pull out the USB and my Ubuntu 12.04 loads up. I repeated this several times, moving the USB-related options to the top of the boot priority list, but the results are the same. I visited some sites on the net, like http://www.techspot.com/community/topics/cant-replace-ubuntu-with-windows.175716/, that say I have to zero out my hard drive. My question is: how? Note: erasing all my data is no problem, because I have nothing important to back up. If I have to lose my primary OS (Ubuntu 12.04) in the process, I am ready to, as my aim is just to install Windows 7 successfully. Please don't answer that there is something wrong with my USB, my USB reader/port/hardware, or the content inside the USB, because they all work fine on my brother's PC, where it boots up flawlessly.
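    A hedged sketch of zeroing from the installed Ubuntu or a live session (assuming the internal disk is /dev/sda; confirm with fdisk first, and note this irreversibly destroys everything on the disk, including the running system):
        sudo fdisk -l                                      # confirm which disk is the laptop drive
        sudo dd if=/dev/zero of=/dev/sda bs=1M count=10    # zero the MBR and first sectors
        # or zero the whole drive (can take hours):
        sudo dd if=/dev/zero of=/dev/sda bs=1M
    Zeroing just the first sectors is often enough, since that is where the boot record that can confuse the Windows installer lives.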

    Read the article

  • Database OR Array

    - by rezoner
    What is the exact point of using an external database system if I have simple relations (95% of queries depend on ID)? I am storing users and their stats. Why would I use an external database if I can have neat constructions like: db.users[32] = something An array of 500K users is not that big an effort for RAM. Pros are: no problematic asynchronicity (instant results); easy export/import; dealing with the database like with a native object, LITERALLY. P.S. and considerations: would it be faster or slower to do collection[3] than db.query("select ...? I am going to store it as a file/s. There is only ONE application/process accessing this data, and the code is executed line by line; please don't elaborate about locking. Please don't answer with database propositions, but with reasons to use an external DB over a native array/object. I have experience with a few databases; that's not the question here. What I am building is a client/gateway/server(s) game. The gateway deals with all users' data, processing, authenticating, writing statistics, etc. No other part of the software needs to access this data/database directly.
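    A hedged sketch of the in-memory approach (Node.js assumed from the db.users[32] style; all names are placeholders): keep the object native, and make persistence a periodic atomic dump so a crash cannot leave a torn file.
        var fs = require("fs");

        var users = {};                               // id -> user record, native access
        users[32] = { name: "player32", score: 0 };

        setInterval(function () {
          var tmp = "users.json.tmp";
          fs.writeFile(tmp, JSON.stringify(users), function (err) {
            if (!err) fs.rename(tmp, "users.json", function () {});  // atomic swap on the same fs
          });
        }, 60000);
    For this single-process access pattern, collection[3] is a plain memory read and will beat any db.query round trip; what you give up is crash durability between dumps and ad-hoc querying.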

    Read the article

  • Cloning Win 7 installation from MBR to GPT drive and make it bootable

    - by Nelluk
    I've seen threads on similar topics, such as this one, but the answers never seem to solve how to make the clone bootable. I have Win 7 64-bit on a PC, installed on a 2TB MBR volume. The motherboard is UEFI compatible. I just installed a secondary internal 3TB drive, which will be partitioned as GPT. Is there a relatively easy way to clone my installation over to the new drive and have that drive be bootable? I have used EaseUS Partition Master to clone the C volume to the D volume, but that would not boot, and I assume the issue is that one is MBR and one is GPT. Is there a process to do this?
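    For context, a GPT disk boots Windows only in UEFI mode, which needs an EFI System Partition on the new drive and the firmware set to boot UEFI. A hedged sketch of rebuilding the boot files after cloning, from a Windows install disc's recovery prompt (the drive letters are assumptions, and the clone may still need other fixes):
        rem D: = the cloned Windows volume, S: = the new FAT32 EFI System Partition
        bcdboot D:\Windows /s S: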

    Read the article

  • Caching WCF javascript proxy on browser

    - by oazabir
    When you use WCF services from Javascript, you have to generate the Javascript proxies by hitting Service.svc/js. If you have five WCF services, that means five javascripts to download. As browsers download javascripts synchronously, one after another, this adds latency to page load and slows down page rendering performance. Moreover, the same WCF service proxy is downloaded on every page, because the generated javascript file is not cached on the browser. Here is a solution that will ensure the generated Javascript proxies are cached on the browser and, when there is a hit on the service, will respond with HTTP 304 if the Service.svc file has not changed. Here's a Fiddler trace of a page that uses two WCF services. You can see there are two /js hits and they are sequential. Every visit to the same page, even within the same browser session, results in making those two hits to /js. Second time when the same page is browsed: you can see everything else is cached, except the WCF javascript proxies. They are never cached because the WCF javascript proxy generator does not produce the necessary caching headers to cache the files on the browser. Here's an HttpModule for IIS and IIS Express which will intercept calls to the WCF service proxy. It first checks if the service has changed since the version cached on the browser. If it has not changed, it returns HTTP 304 and does not go through the service proxy generation process, which saves some CPU on the server. But if the request is for the first time and there's no cached copy on the browser, it delivers the proxy and also emits the proper cache headers to cache the response on the browser. http://www.codeproject.com/Articles/360437/Caching-WCF-javascript-proxy-on-browser Don't forget to vote.
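    The CodeProject link has the author's full module; below is only a minimal sketch of the core idea under stated assumptions (not the article's actual code): short-circuit .svc/js requests with 304 when the .svc file is unchanged, otherwise stamp the response with Last-Modified. It would still need to be registered in web.config like any HttpModule.
        using System;
        using System.IO;
        using System.Web;

        public class ProxyCacheModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    var application = (HttpApplication)sender;
                    HttpContext ctx = application.Context;
                    if (!ctx.Request.Path.EndsWith(".svc/js", StringComparison.OrdinalIgnoreCase))
                        return;

                    // map "/Service.svc/js" back to the physical .svc file
                    string svcPath = ctx.Request.Path.Substring(0, ctx.Request.Path.Length - 3);
                    DateTime lastWrite = File.GetLastWriteTimeUtc(ctx.Server.MapPath(svcPath));

                    DateTime since;
                    if (DateTime.TryParse(ctx.Request.Headers["If-Modified-Since"], out since)
                        && lastWrite <= since.ToUniversalTime().AddSeconds(1))
                    {
                        ctx.Response.StatusCode = 304;       // browser copy is still valid
                        ctx.Response.SuppressContent = true;
                        application.CompleteRequest();       // skip proxy generation entirely
                        return;
                    }

                    // first hit: let generation proceed, but make the result cacheable
                    ctx.Response.Cache.SetCacheability(HttpCacheability.Private);
                    ctx.Response.Cache.SetLastModified(lastWrite);
                };
            }

            public void Dispose() { }
        }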

    Read the article

  • How do I choose a package format for Linux software distribution?

    - by Ian C.
    We have a Java-based application that, to date, we've been distributing as a tarball with instructions for deploying. It's mostly self-contained, so deployment is fairly straightforward: untar on the disk you'd like it to live on; make sure a suitable distribution and version of Java is in your path; verify ownership and group on all the files; start up the server processes with our start script. If the user wants to get into start-on-boot stuff with SysV, we have written instructions and a template init file for it in our tarball. We'd like to make this installation process a little more seamless: take care of the permissions and the init script deployment. We're also going to start bundling our own JRE with the application so that we're mostly free of external dependencies. The question we're faced with now is: how do we pick a package format for distribution? Is RPM the standard? Can all package management tools deal with it now? Our clients primarily run RHEL and CentOS, but we do have some using SuSE and even Debian. If we can pick a distro-agnostic format, we'd prefer that. What about a self-extracting shell script, something akin to how Java is distributed? If we're dependency-free, would the self-extracting script be sufficient? What features or conveniences would we lose out on going with the script versus a proper package format meant for use by a package manager?
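    One hedged middle path, since the client base spans RPM and deb distros: build both native packages from a single staged directory with a third-party tool such as fpm (the names, version and paths below are placeholders):
        # stage the app under ./stage/opt/myapp first, then:
        fpm -s dir -t rpm -n myapp -v 1.0 --after-install postinstall.sh -C stage opt/myapp
        fpm -s dir -t deb -n myapp -v 1.0 --after-install postinstall.sh -C stage opt/myapp
    The --after-install hook is one place the permission fixes and init script registration could live, and a real package also buys clean upgrades and uninstall, which a bare self-extracting script does not give you for free.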

    Read the article

  • PeopleSoft Mobile Expenses and Mobile Approvals now available in FSCM 9.1

    - by Howard Shaw
    Oracle is pleased to announce the release of two new applications, PeopleSoft Mobile Expenses and PeopleSoft Mobile Approvals, which are now generally available in PeopleSoft FSCM 9.1. These are the first two of many upcoming applications designed and built to cater directly to the mobile workforce by providing user-friendly access to key business functions on a smartphone or tablet. Enter and Submit Expenses Anytime, Anywhere: PeopleSoft Mobile Expenses provides the ability to enter employee expense reports quickly and easily, for busy travelers on the go. The contemporary, streamlined user interface is optimized for mobile devices (that support HTML 5), such as tablets or smartphones, and provides a simple-to-use tool for capturing expenses as they are being incurred, submitting expense reports while waiting at the airport, approving your employees' expense reports, and more. And since it is part of the PeopleSoft Mobile Applications suite, you don't have to wait until you return home or to the office, which can lead to improved efficiencies. The user interface and gesture actions (for example, swipe, touch, and so on) will be immediately familiar to mobile device users, and are specifically targeted to keep the experience as streamlined as possible for just the tasks you need to get to while on the go. In addition, PeopleSoft Mobile Expenses leverages all of the powerful expense policy compliance tools delivered by PeopleSoft Expenses, contributing to reduced spend and increased efficiency throughout your organization. PeopleSoft Mobile Expenses is integrated directly with PeopleSoft Mobile Approvals, so managers can quickly approve submitted expense reports in addition to entering or reviewing their own expenses. Manage Approvals Anytime, Anywhere: PeopleSoft Mobile Approvals improves productivity and keeps business moving forward when your users are on the go, without compromising business imperatives and operational policies. This innovative solution is delivered using the latest HTML 5 technology to allow customers to manage their critical tasks anytime through any device. PeopleSoft Approvals enables your users to approve transactions through the desktop, smart phones or tablet devices. This will speed up the approval process, thus avoiding potential late payment penalties and supporting early payment discounts for invoices. For more information, please watch the Video Feature Overviews (VFO) available on YouTube (links below) or contact your application sales representative. PeopleSoft Mobile Expenses PeopleSoft Mobile Approvals The PeopleSoft Mobile Applications 9.1 documentation update for Bundle 23 is available under MOS Document ID 1495035.1.

    Read the article

  • SSH login to Cisco switch using Rancid times out

    - by Lars
    I have a 3560 switch that I have configured to accept SSH logins, and this works fine. However, I cannot get Rancid to complete the login process to any of my switches using SSH; I get a timeout error after a minute or so. Telnet logins work fine with the same username and password. Here is my Rancid setup in .cloginrc:
        add user * {myuser}
        add password * {strongAccessPassword} {strongEnablePassword}
        add method * ssh telnet
    Then, when I run bin/clogin 10.10.1.10, I get:
        # bin/clogin 10.10.1.10
        10.10.1.10
        spawn ssh -c 3des -x -l myuser 10.10.1.10
        ###############################################
        Please authenticate.
        ###############################################
        Password:
        Error: TIMEOUT reached
    Again, when I make telnet the preferred method in .cloginrc, it works without issue.
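    Two hedged things to try, based on the spawn line above: clogin is forcing the old 3des cipher, which some IOS SSH configurations refuse, and its default timeout is fairly short. Both knobs are documented in cloginrc(5); the cipher choice here is an assumption to verify against the switch's supported list:
        add cyphertype * {aes128-cbc}
        add timeout * {30}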

    Read the article

  • Windows Upgrade vs Full Install

    - by James Atkinson
    I'm in the process of purchasing a netbook for use while traveling. The included OS is XP; however, I would like to upgrade(?) to Windows 7. My question: does a Windows upgrade have the same physical footprint and performance as a full install? Does an upgrade leave behind unused files/resources that were originally included in XP? If so, are there ways to reduce this? I'm trying to reduce as much OS bloat as possible. Please let me know if my question is unclear. Thanks. Related to http://superuser.com/questions/60646/is-a-clean-install-really-better-than-an-upgrade; however, that doesn't address the "leftovers" question.

    Read the article

  • Access Log Files

    - by Matt Watson
    Some of the simplest things in life make all the difference. For a software developer who is trying to solve an application problem, being able to access log files, the Windows event viewer, and other details is priceless. But ironically enough, most developers aren't even given access to them. Developers have to escalate the issue to their manager or a system admin to retrieve the needed information. Some companies create workarounds to solve the problem or use third party solutions.
    Home grown solutions to access log files: Some companies roll their own solution. These solutions can be great, but are not always real time, and don't account for the Windows event viewer, config files, server health, and other information that is needed to fix bugs. Typical approaches: VPN or FTP access to log file folders; programs that collect log files and move them to a centralized server; code modified to write log files to a centralized place.
    Expensive solutions to access log files: Some companies buy expensive solutions like Splunk or other log management tools. But in a lot of cases that is overkill, when all the developers need is the ability to just look at log files, not do analytics on them.
    There has to be a better solution to access log files: Stackify recently came up with a solution to exactly this problem. Their software gives developers remote visibility into all the production servers without allowing them to remote desktop into the machines. They get real time access to log files, the Windows event viewer, config files, and other things that developers need. This allows the entire development team to be more involved in the process of solving application defects. Check out their product to learn more: http://www.Stackify.com

    Read the article

  • Find Content Related Images in Internet Explorer 8

    - by Asian Angel
    Do you want an easy way to find images related to news stories or articles when browsing in Internet Explorer? Then you will definitely want to have a look at the Bing Image Search accelerator. Bing Image Search in Action: Two simple steps will have the new accelerator installed: click Add to Internet Explorer to start the process and confirm the installation when the secondary window appears. For the first of our two examples we chose the "Deepwater Horizon rig". To find images, highlight the word/name that you are interested in, and select Bing Image Search in the context menu. Hovering your mouse over the context menu listing will present a "top" image in a popup window... or if you prefer to view multiple images, click on the context menu listing. A Bing image search for the highlighted word/name will be opened in a new tab. Our second example was the Guatemalan volcano "Pacaya". As before, we first viewed the popup window image... followed by a full image search in a new tab. The accelerator makes it quick and easy to find additional images when needed. Conclusion: The Bing Image Search accelerator makes it a simple task to find additional images related to what you are reading or looking at while browsing. Links: Add the Bing Image Search accelerator to Internet Explorer 8

    Read the article

  • SQL Bits X – Temporal Snapshot Fact Table Session Slide & Demos

    - by Davide Mauri
    Ten days have already passed since SQL Bits X in London. I really enjoyed it! Events like that are great not only for the content but also for meeting friends whom, due to distance, it is not possible to meet every day. Friends from PASS, SQL CAT, Microsoft, MVPs and so on, all in one place, drinking beers and whisky and having fun: a perfect mixture for a great learning and sharing experience! I also enjoyed delivering my session on Temporal Snapshot Fact Tables. Given that the subject is very specific, I was not expecting a lot of attendees... but I was totally wrong! It seems that the problem of handling daily snapshots of data is more common than I expected. I have also already had feedback from several attendees who applied the explained technique to their existing solutions with success. This is just what a speaker at such a conference wishes to hear! :) If you want to take a look at the slides and the demos, you can find them on SkyDrive: https://skydrive.live.com/redir.aspx?cid=377ea1391487af21&resid=377EA1391487AF21!1151&parid=root The demo is available both for SQL Server 2008 and for SQL Server 2012. With this last version, you can also simplify the ETL process using the new LEAD analytic function. (This is not done in the demo; I've left this option as a little exercise for you. :) )
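    To illustrate the LEAD hint above, a hedged sketch (table and column names are hypothetical, not from the demos): derive each snapshot row's validity interval without a self-join, which is the part SQL Server 2012 simplifies.
        -- each row is valid from its snapshot date until the next snapshot for the same key
        SELECT CustomerID,
               SnapshotDate AS ValidFrom,
               LEAD(SnapshotDate, 1, '9999-12-31')
                   OVER (PARTITION BY CustomerID ORDER BY SnapshotDate) AS ValidTo,
               Balance
        FROM dbo.DailySnapshots;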

    Read the article
