Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 494/880 | < Previous Page | 490 491 492 493 494 495 496 497 498 499 500 501  | Next Page >

  • Merging multiple top-level domains into a single domain

    - by user23089
    My client had multiple top-level domains. Each one represented an insurance program within a specific vertical. Across the sites at these alternate domains there was a 30/70 mix of duplicate vs. original content. Some of the alternate domains ranked very well for their target keyphrase groups, while others were absent from results pages. We advised the client to merge the multiple domains into their existing main domain, for usability and SEO reasons. We recently ran the merger. Here was our process:
    1. On the main domain, transfer the content such that it matches 1-for-1 the content on the various alternate domains.
    2. Set up Google Webmaster Tools on the main domain.
    3. Push the new content on the main domain live and submit a corresponding sitemap to Google.
    4. Establish 301 redirects on the alternate domains, such that each alternate domain URL points to its respective page on the main domain.
    We did this 12 days ago, and pages (previously on the alternate domains) that had ranked well on Google have now plummeted or are entirely non-existent. Did we do the right thing by merging multiple top-level domains into a single domain? Is this initial dip in rankings normal? How soon should we expect to see rankings recover?
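    For step 4, a minimal sketch of what the per-domain 301 rule might look like on Apache (mod_alias); the domain names are hypothetical, and it assumes paths match 1-for-1 after the content transfer:

        # Hypothetical vhost for one alternate domain
        <VirtualHost *:80>
            ServerName www.alt-insurance-example.com
            # 301 every old URL to its respective page on the main domain
            RedirectMatch permanent ^/(.*)$ http://www.main-example.com/$1
        </VirtualHost>

        # Spot-check a redirect: expect "301 Moved Permanently" plus a Location header
        curl -I http://www.alt-insurance-example.com/some-page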

    Read the article

  • Firefox: This connection is untrusted + Behind corporate firewall

    - by espais
    I've seen some similar issues strewn throughout Google's results about this, but none seem to be corporate-specific. I continually get the 'This connection is untrusted' screen every time I attempt to log into a secure site...for instance Gmail. This is pretty annoying as sometimes I have to go through the process of adding the exception two or three times before it finally lets me into Gmail. I am behind a corporate firewall, going through an internal proxy server to get to the Internet, so there is no possibility for me to update the firewall...etc. Does anybody know a way around this? Can it simply be disabled (and is that safe)? EDIT I'm going to reopen this question with a bit of new information. I have been using Google Chrome lately until today, and one thing that I noticed was that I never had this issue when using either Chrome or Internet Explorer. Is there something that these other browsers do that I need to manually do in FF?
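    A likely cause: the corporate proxy intercepts SSL and re-signs sites with its own root CA. Chrome and Internet Explorer trust it automatically because they use the Windows certificate store, while Firefox keeps its own store, which would explain the difference noted in the EDIT. One hedged workaround sketch, assuming you can export the proxy's root certificate (file name and profile path are hypothetical), is to import it with NSS certutil, or by hand via Firefox's certificate manager UI:

        # Import the corporate root CA into the Firefox profile's NSS store
        # (requires the NSS tools, e.g. the libnss3-tools package on Linux)
        certutil -A -n "Corp Proxy Root CA" -t "C,," \
            -i corp-root.crt -d ~/.mozilla/firefox/xxxxxxxx.default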

    Read the article

  • Can't get the NetworkManager applet to appear in the Gnome panel in Ubuntu

    - by Nate
    I have researched this problem extensively and I can't seem to find an answer. In Ubuntu 10.04 LTS, I want to connect to my VPN through the NetworkManager applet. I installed all the network-manager packages, including the GNOME client. I understand I need to add the "Notification Area" to the panel, which I have done. I checked that NetworkManager is running:

        nate@nate-desktop:~$ service network-manager status
        network-manager start/running, process 763

    In /etc/NetworkManager/nm-system-settings.conf, I have added managed=true (don't know if this matters, but I saw it suggested on one forum):

        nate@nate-desktop:~$ more /etc/NetworkManager/nm-system-settings.conf
        # This file is installed into /etc/NetworkManager, and is loaded by
        # NetworkManager by default. To override, specify: '--config file'
        # during NM startup. This can be done by appending to DAEMON_OPTS in
        # the file:
        #
        # /etc/default/NetworkManager
        #
        [main]
        plugins=ifupdown,keyfile

        [ifupdown]
        #managed=false
        managed=true

    At this point, it looks like NetworkManager is running, but it's not appearing in the Notification Area of the panel. I don't know what else to try. Any ideas?
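    One hedged thing to check: the network-manager service and the panel applet (nm-applet) are separate processes, so the daemon can be running while the applet itself has silently died. A quick sketch:

        # Is the applet process running? (separate from the daemon)
        pgrep -l nm-applet

        # If not, start it by hand for the current desktop session
        nm-applet &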

    Read the article

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index, and have halved the peak CPU usage and Apache memory usage, but the MySQL memory stays constantly at 2.2GB (~51% of available memory on the server). Here's the graph from Plesk. Running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this, rather than showing peaks and troughs with usage of the app? Here's the output of the MySQL Tuning Primer script:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.0.77-log x86_64
        Uptime = 1 days 14 hrs 4 min 21 sec
        Avg. qps = 22
        Total Questions = 3059456
        Threads Connected = 13

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1 sec.
        You have 6 out of 3059477 that take longer than 1 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is NOT enabled.
        You will not be able to do point in time recovery
        See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html

        WORKER THREADS
        Current thread_cache_size = 0
        Current threads_cached = 0
        Current threads_per_sec = 2
        Historic threads_per_sec = 0
        Threads created per/sec are overrunning threads cached
        You should raise thread_cache_size

        MAX CONNECTIONS
        Current max_connections = 100
        Current threads_connected = 14
        Historic max_used_connections = 20
        The number of used connections is 20% of the configured maximum.
        Your max_connections variable seems to be fine.

        INNODB STATUS
        Current InnoDB index space = 6 M
        Current InnoDB data space = 18 M
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 8 M
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 2.07 G
        Configured Max Per-thread Buffers : 274 M
        Configured Max Global Buffers : 2.01 G
        Configured Max Memory Limit : 2.28 G
        Physical Memory : 3.84 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 4 M
        Current key_buffer_size = 7 M
        Key cache miss rate is 1 : 40
        Key buffer free ratio = 81 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is supported but not enabled
        Perhaps you should set the query_cache_size

        SORT OPERATIONS
        Current sort_buffer_size = 2 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 132.00 K
        You have had 16 queries where a join could not use an index properly
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        If you are unable to optimize your queries you may want to
        increase your join_buffer_size to accommodate larger joins in one pass.
        Note! This script will still suggest raising the join_buffer_size when
        ANY joins not using indexes are found.

        OPEN FILES LIMIT
        Current open_files_limit = 1024 files
        The open_files_limit should typically be set to at least 2x-3x that of
        table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_cache value = 64 tables
        You have a total of 426 tables
        You have 64 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use
        You should probably increase your table_cache

        TEMP TABLES
        Current max_heap_table_size = 16 M
        Current tmp_table_size = 32 M
        Of 15134 temp tables, 9% were created on disk
        Effective in-memory tmp_table_size is limited to max_heap_table_size.
        Created disk tmp tables ratio seems fine

        TABLE SCANS
        Current read_buffer_size = 128 K
        Current table scan ratio = 2915 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 142213
        Your table locking seems to be fine

    The app is a Facebook game with about 50-100 concurrent users. Thanks, Rob
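    For what it's worth, a roughly constant resident size is normal for MySQL: the global buffers are allocated once and held for the life of the process, so memory won't rise and fall with application load. Acting on the primer's own suggestions, a hedged my.cnf sketch follows; the values are illustrative starting points rather than tuned recommendations, and innodb_buffer_pool_size needs a server restart on 5.0:

        # /etc/my.cnf fragment -- illustrative values only
        [mysqld]
        thread_cache_size       = 8      # primer: threads created are overrunning the cache
        table_cache             = 512    # primer: 426 tables, 1% hit rate at 64
        query_cache_size        = 32M    # primer: cache supported but not enabled
        innodb_buffer_pool_size = 128M   # primer: pool is 8M with 0% free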

    Read the article

  • MEncoder Install on Ubuntu

    - by Tauqeer Ahmad
    I am writing this after checking almost all the posts, but none of those solved my problem. I am trying to install mencoder to process some videos, but strange errors come up. For example, when I try sudo apt-get install mencoder the following error comes out:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help
        to resolve the situation:

        The following packages have unmet dependencies:
         mencoder : Depends: mplayer
                    Depends: libasound2 (> 1.0.24.1)
                    Depends: libavcodec53 (>= 4:0.8~beta2-2) but it is not installable or
                             libavcodec-extra-53 (>= 4:0.8~beta2-2) but it is not going to be installed
                    Depends: libavformat53 (>= 4:0.8~beta2-2) but it is not installable or
                             libavformat-extra-53 (>= 4:0.8~beta2-2) but it is not going to be installed
                    Depends: libcdparanoia0 (>= 3.10.2+debian) but it is not installable
                    Depends: libenca0 (>= 1.9) but it is not installable
                    Depends: libfontconfig1 (>= 2.8.0) but it is not installable
                    Depends: libgif4 (>= 4.1.4) but it is not installable
                    Depends: libjpeg8 (>= 8c) but it is not installable
                    Depends: liblzo2-2 but it is not installable
                    Depends: libsmbclient (>= 3.0.24) but it is not installable
                    Depends: libspeex1 (>= 1.2~beta3-1) but it is not installable
                    Depends: libtheora0 (>= 1.0) but it is not installable
        E: Unable to correct problems, you have held broken packages.

    Can anyone help to solve this issue? I tried to find static builds of MEncoder but could not.
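    That wall of "not installable" core libraries usually points at disabled repository components or a stale package index rather than at mencoder itself. A hedged first-pass sketch:

        # Refresh the index and look for held packages
        sudo apt-get update
        dpkg --get-selections | grep hold

        # Confirm the universe/multiverse components are enabled
        grep -E '^deb .*(universe|multiverse)' /etc/apt/sources.list

        # Let apt repair any partial state, then retry
        sudo apt-get -f install
        sudo apt-get install mencoder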

    Read the article

  • Issues running java in Solaris 9 container

    - by Matthew Watson
    I have a Solaris 9 container built from a physical server using flarcreate. Everything seems fine, except that trying to run any "java -server" process fails with the error below. This is on a Sunfire T1000 machine running Solaris 10 10/09 s10s_u8wos_08a SPARC, with jdk1.5.0_15:

        Exception java.lang.OutOfMemoryError: requested -4 bytes for size_t in
        /BUILD_AREA/jdk1.5.0_15/hotspot/src/os/solaris/vm/os_solaris.cpp.
        Out of swap space?

    As far as I can tell I'm not actually out of swap space. Running java in client mode works without a problem. Google's only suggestion is related to x86. Any suggestions? Thanks.
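    Not an answer, but a hedged diagnostic sketch from inside the zone; the "requested -4 bytes" message reads like a size calculation going negative rather than true swap exhaustion, so comparing what the zone reports against any zone resource caps may help:

        # What the zone itself believes about swap and memory
        swap -s
        prtconf | grep -i memory

        # Pin the heap explicitly so -server doesn't auto-size it
        java -server -Xms256m -Xmx512m -version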

    Read the article

  • Is there a way to browse a media server (MediaTomb) with a media player (VLC)?

    - by twig
    I currently have an EEE PC set up with Linux Mint as my media server, using MediaTomb. I use VLC as my media player on another Windows computer to watch videos off the media server. It works fine, but the current process is:
    1. Open up a browser and navigate to the folder.
    2. Find the file I want to play and copy its URL.
    3. Paste the URL into VLC and watch.
    This is fine for me on the PC, but it is a little troublesome for my parents to grasp (or for me to use on the phone). Ideally I'd like to:
    1. Open up VLC.
    2. Browse to the file (using VLC).
    3. Click/select to play.
    If there is any solution which is similar to this, please let me know. I'm willing to change the software on both server and client to accommodate it (although it somewhat depends on which formats are supported on the server). Side note: I've tried searching online for this but I find a lot of jargon such as "media server/centre", media streaming, DLNA, UPnP, and feel that some people are either using them interchangeably or incorrectly.

    Read the article

  • New cloud development workflow using Github, Cloud9ide and CloudFoundry.

    - by weng
    So time is changing towards cloud development/computing. I'm trying to get the new "cloud" workflow based on the services I'm going to use: Github, Cloud9ide and CloudFoundry. Here is what is on my mind: Github acts as the central (main) repo, just like yesterday's local filesystem. Every service bases itself on this main repo. Workflow:
    1. Github: I create a new Github repo to serve as the main repo for the project.
    2. Cloud9ide: I open my Github repo and write my tests and implementation (BDD/TDD). When I'm ready I save (commit) it to the main repo on Github.
    3. X: A running instance of Jenkins detects that someone has committed, fetches the latest commit, builds, deploys, tests (yeti and/or selenium) and reports whether the tests passed. If not, I make another commit until all tests are passing.
    4. X: I run the CloudFoundry commands to push the main Github repo to CloudFoundry's server, and it deploys my app automatically.
    What I'm still confused about is where this X environment will be. On a local server where I have to install Jenkins? Or could I install it on Cloud9ide (when Java is supported), or will it be on another cloud service? Also, that X environment has to be able to fetch (clone) the Github repo and run the build scripts. And since the concept of Cloud9ide is very new and there haven't been any predecessors, I really wonder what the workflow will look like. We all know Github's workflow. We now know CloudFoundry's workflow (deploy/scale with a RESTful API/command line tool). But how Cloud9ide will operate is still somewhat unclear to me. Someone on Cloud9ide mentioned that there will be buttons like "deploy" so I can deploy with one click. But that, I guess, will depend on what services that deploy process hooks into. Could someone shed light on this cloud workflow topic and fill in the gaps? Thanks.
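    A hedged sketch of what the "X" step might look like as a Jenkins shell build step, using the vmc command-line client CloudFoundry shipped at the time (repo URL, test script and app name are hypothetical):

        # Jenkins shell build step -- illustrative only
        git clone https://github.com/youruser/yourapp.git .
        ./run-tests.sh            # hypothetical wrapper around the yeti/selenium suite

        # On green, push to CloudFoundry with the vmc CLI
        vmc target api.cloudfoundry.com
        vmc login
        vmc push yourapp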

    Read the article

  • How to install network drivers during installation?

    - by Matt
    I have a server that I'm attempting to install Windows onto. However, the disk is an iSCSI target provided by iPXE. Everything appears to go well until, about 3/4 of the way through the install process, I get an error about a critical driver missing and the installation is cancelled. I would say the critical driver is for the network card: it's an Intel NIC and the drivers are not on the Windows installation CD. I tried slipstreaming them with RTSevenLite, but after it created the CD it seems it failed to make it bootable. I've also not been successful in making a bootable USB thumb drive or USB HDD. I suspect a buggy BIOS even though I have the latest. How do I install network drivers during installation? Windows used to provide an optional "F6 during install" feature, but this seems to be missing in Windows Server 2008. Perhaps there is a way to do this, or another method?
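    Two hedged avenues: Server 2008-era setup replaces F6 with a "Load Driver" button on the disk-selection screen (it reads drivers from USB), and drivers can also be injected into the media's WIM images with DISM from the Windows AIK. A sketch of the injection route, with hypothetical paths:

        rem Mount the setup image, inject the Intel NIC driver, commit
        dism /Mount-Wim /WimFile:D:\sources\boot.wim /Index:2 /MountDir:C:\mount
        dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\intel-nic /Recurse
        dism /Unmount-Wim /MountDir:C:\mount /Commit

        rem Repeat for install.wim so the installed OS keeps the driver
        dism /Mount-Wim /WimFile:D:\sources\install.wim /Index:1 /MountDir:C:\mount
        dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\intel-nic /Recurse
        dism /Unmount-Wim /MountDir:C:\mount /Commit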

    Read the article

  • What is the difference between Row Level Security and RPD security?

    - by Jeffrey McDaniel
    Row level security (RLS) is a feature of Oracle Enterprise Edition database. RLS enforces security policies at the database level, which means any query executed against the database will respect the specific security applied through these policies. For P6 Reporting Database, these policies are applied during the ETL process. This gives database users the ability to access data with security enforcement even outside of the Oracle Business Intelligence application. RLS is a new feature of P6 Reporting Database starting in version 3.0. It allows for maximum security enforcement outside of the ETL and inside of Oracle Business Intelligence (Analysis and Dashboards). Policies are defined against the STAR tables based on Primavera Project and Resource security. RLS is the security method for Oracle Enterprise Edition customers. See previous blogs and the P6 Reporting Database Installation and Configuration guide for more on security specifics.
    To allow the use of Oracle Standard Edition database for those with a small database (as defined in the P6 Reporting Database Sizing and Planning guide), an RPD with non-RLS security is also available. RPD security is enforced by adding specific criteria to the physical and business layers of the RPD for those tables that contain projects and resources, and for those fields that are cost fields vs. non-cost fields. With the RPD security method, Oracle Business Intelligence enforces security. RLS security is the default security method. Additional steps are required at installation and ETL run time for those Oracle Standard Edition customers who use RPD security. The RPD method of security enforcement existed from P6 Reporting Database 2.0/P6 Analytics 1.0 up until RLS became available in P6 Reporting Database 3.0/P6 Analytics 2.0.
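    For readers unfamiliar with the mechanism, database-level RLS policies are generally registered through Oracle's DBMS_RLS package; a minimal illustrative sketch follows (schema, table and function names are hypothetical, not P6's actual policy definitions):

        -- Attach a predicate-generating function to a STAR table (illustrative)
        BEGIN
          DBMS_RLS.ADD_POLICY(
            object_schema   => 'STARUSER',            -- hypothetical schema
            object_name     => 'W_PROJECT_D',         -- hypothetical project table
            policy_name     => 'PROJ_SECURITY',
            function_schema => 'STARUSER',
            policy_function => 'PROJ_SEC_PREDICATE'); -- returns a WHERE fragment
        END;
        /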

    Read the article

  • Sync Google Contacts with QuickBooks

    - by dataintegration
    The RSSBus ADO.NET Providers offer an easy way to integrate with different data sources. In this article, we include a fully functional application that can be used to synchronize contacts between Google and QuickBooks. Like our QuickBooks ADO.NET Provider, the included application supports both the desktop versions of QuickBooks and QuickBooks Online Edition.
    Getting the Contacts
    Step 1: Google accounts include a number of contacts. To obtain a list of a user's Google Contacts, issue a query to the Contacts table. For example:

        SELECT * FROM Contacts

    Step 2: QuickBooks stores contact information in multiple tables. Depending on your use case, you may want to synchronize your Google Contacts with QuickBooks Customers, Employees, Vendors, or a combination of the three. To get data from a specific table, issue a SELECT query to that table. For example:

        SELECT * FROM Customers

    Step 3: Retrieving all results from QuickBooks may take some time, depending on the size of your company file. To narrow your results, you may want to use a filter by including a WHERE clause in your query. For example:

        SELECT * FROM Customers WHERE (Name LIKE '%James%') AND IncludeJobs = 'FALSE'

    Synchronizing the Contacts
    Synchronizing the contacts is a simple process. Once the contacts from Google and the customers from QuickBooks are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete contacts in either data source as needed.
    Pre-Built Demo Application
    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and ADO.NET Provider for QuickBooks V3, and will expire in 2013.
    Source Code
    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the QuickBooks ADO.NET Data Provider V3, which can be obtained here.

    Read the article

  • Batch processing multi-TIFF in Irfan view

    - by hemalshah
    I have to convert the DPI of more than 5,000 TIFF images on a monthly basis, from 200x200 to 100x100. I can do that in IrfanView using a .bat file that I have created. The following is the .bat file code:

        @"c:\program files\irfanview\i_view32.exe" "e:\batch1\*.tif" /aspectratio /resample /tifc=4 /dpi=(100,100) /convert="e:\batch2\*.tif"

    where /tifc=4 is Fax 4 compression. However, the above code doesn't change the DPI of any page except the first: only the first page in each TIFF gets converted to 100 DPI, and all the other pages are still 200 DPI. I am using Windows XP Professional and IrfanView. Can anyone tell me what I am missing, or suggest an alternative program where I can create a .bat file and run the batch process from the command line?
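    IrfanView's /convert is page-oriented, which is why only the first page of each multipage TIFF is touched. As a hedged alternative, ImageMagick processes every page of a multipage TIFF in one pass; a sketch with hypothetical paths, keeping Group 4 fax compression:

        rem Use the full path to ImageMagick's convert.exe to avoid the name
        rem clash with the Windows convert.exe utility
        for %%f in (e:\batch1\*.tif) do "c:\program files\imagemagick\convert.exe" "%%f" -units PixelsPerInch -resample 100x100 -compress Group4 "e:\batch2\%%~nxf"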

    Read the article

  • Big project layout : adding new feature on multiple sub-projects

    - by Shiplu
    I want to know how to manage a big project with many components under a version control management system. In my current project there are four major parts: web, server, admin console, and platform. The web and server parts use two libraries that I wrote. In total there are five git repositories and one mercurial repository. The project build script is in the platform repository; it automates the whole building process. The problem is that when I add a new feature that affects multiple components, I have to:
    1. Create a branch in each of the affected repos.
    2. Implement the feature.
    3. Merge it back.
    My gut feeling is "something is wrong". So should I create a single repo and put all the components there? I think branching would be easier in that case. Or do I just keep doing what I am doing right now? In that case, how do I solve this problem of creating a branch on each repository?
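    Whichever layout wins, the mechanical part of the pain can at least be scripted; a hedged sketch that opens the same feature branch across every affected repo (directory names are hypothetical):

        #!/bin/sh
        # Usage: ./branch-all.sh feature-x
        FEATURE="$1"
        for repo in web server admin-console platform lib1; do
            (cd "$repo" && git checkout -b "$FEATURE")
        done

    This is also the itch git submodules were designed to scratch, if a single repository feels too drastic.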

    Read the article

  • Latest News on Service, Field Service and Depot Repair Products

    - by LuciaC
    Service and Depot Repair Customer Advisory Boards (CAB)
    In November 2012 the Service and Depot Repair CABs joined together for a combined meeting at Oracle HQ in Redwood Shores, California to discuss all the latest news in the Oracle Service, Field Service and Depot Repair products. Over four days attendees shared their experiences with implementing and using these EBS CRM products and heard details of recent enhancements and future product plans direct from Development. You can access all the Oracle presentations via Doc ID 1511768.1. Here are just some of the highlights:
    - Field Service: Next Generation Dispatch Center; Endeca Integration; Case Study: Oracle Sun Field Service implementation
    - Mobile Field Service: New capabilities for technician-facing applications
    - Service: Integration with Oracle Projects; New Teleservice enhancements
    - Spares Management: Supplier Warranty; External Repair Execution
    - Oracle Knowledge (Inquira): Introduction for Service Organizations
    If you weren't at the CAB, take a look at these presentations for great information about what's new and what's coming up in these products.
    12.1.3++ Features for Field Service, Mobile Field Service, Spares Management, FSTP & Advanced Scheduler
    In June 2012 the R12.1.3++ patches were released for Field Service, Mobile Field Service, FSTP and Advanced Scheduler. These patches contain new and updated functionality for these CRM Service suite modules. New functionality includes:
    - Field Service/FSTP/MFS: Support for Transfer Parts across subinventories in different organizations; Validation to ensure Installed Item matches Returned Item; MFS Wireless - Support for Special Address Creation; MFS Wireless - Enhanced Debrief Flow
    - Advanced Scheduler: Scheduler UI - Display of Spares Sourcing Information; Auto Commit (Release) Tasks by Territory; Dispatch Center UI - Display Spare Parts Arrival Information
    - Spares Management: Enhancements to the Task Reassignment Process; Enhancements to the Parts Requirements UI; Supply Chain Enhancements to allow filtering of ship methods from source location by distance
    Check the following notes for more details and relevant patch numbers:
    Doc ID 1463333.1 - Oracle Field Service Release Notes, Release 12.1.3++
    Doc ID 1452470.1 - Field Service Technician Portal 12.1.3++ New Features
    Doc ID 1463066.1 - Oracle Advanced Scheduler Release Notes, Release 12.1.3++
    Doc ID 1463335.1 - Oracle Spares Management Release Notes, Release 12.1.3++
    Doc ID 1463243.1 - Oracle Mobile Field Service Release Notes, Release 12.1.3++

    Read the article

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to automatic in DB2 v9.7 by default so that DB2 can adjust the values as needed. Most should work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance.
    DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 will allocate small pages (64KB) for all memory allocation, and will expand and shrink the memory as needed. In order to take advantage of the large page sizes (up to 256MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use 256MB pages for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process.
    NUM_IOCLEANERS: This parameter defines the number of page cleaners. The default value of this parameter is AUTOMATIC, which is calculated based on the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. This leads to too many page cleaners competing to flush to disk, causing aio mutex lock contention, so we need to decrease the value. Good practice is to set it to the number of physical devices used by the database table space containers.
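    A hedged sketch of the two overrides with the DB2 command-line processor; the database name and values are hypothetical (DATABASE_MEMORY is given in 4KB pages, so size it to your buffer pools, and set NUM_IOCLEANERS to the number of devices backing the table space containers):

        # Fix DATABASE_MEMORY at an explicit size instead of AUTOMATIC
        db2 UPDATE DB CFG FOR mydb USING DATABASE_MEMORY 4000000

        # One page cleaner per physical device under the table spaces
        db2 UPDATE DB CFG FOR mydb USING NUM_IOCLEANERS 8

        # Verify the page sizes the engine is using (inspect the Pgsz column)
        pmap -sx $(pgrep db2sysc)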

    Read the article

  • Here's your chance: MOS Feedback Sessions @OOW

    - by cwarticki
    Bring your questions, comments, concerns, opinions, recommendations, enhancement requests and any emotional outbursts! As I travel the world and speak to thousands of customers, I receive plenty of feedback about My Oracle Support. Come hear directly from the source: meet Dennis Reno, VP of Customer Portal Experience. The Customer Portal Experience team will host a My Oracle Support Tips and Techniques session and three roundtable feedback sessions at this year's Oracle OpenWorld. The sessions will include a Hardware Support component, as well as best practices that are sure to benefit all My Oracle Support users. The events planned will give our users the opportunity to learn more about how the My Oracle Support customer portal adds value to the support process and to their business needs. The roundtable feedback sessions will allow customers to meet, give feedback, and share their experiences directly with the team responsible for the customer portal experience.

        Date         Time (PT)  Session Name
        Mon, Oct 1   01:45 PM   My Oracle Support: Tips and Techniques for Getting the Best Hardware Support Possible (Session #CON9745)
        Tue, Oct 2   11:00 AM   Roundtable - My Oracle Support General Feedback
        Wed, Oct 3   11:00 AM   Roundtable - My Oracle Support Community Feedback
        Thu, Oct 4   11:00 AM   Roundtable - My Oracle Support General Feedback

    Customers can find more information, including specific details about how to attend, by accessing My Oracle Support at OpenWorld (Article ID 1484508.1). Enjoy OpenWorld everyone! -Chris Warticki, Global Customer Management

    Read the article

  • Install proprietary drivers 14.04 NVIDIA (steam segmentation issue)

    - by allthosemiles
    Recently, I finally got the official drivers for my NVIDIA 560 Ti card installed on Ubuntu 14.04 (hooray). However, when I started looking into installing Steam, I got segmentation errors trying to run the software. I tried installing 32-bit libs and it seemed like they weren't available or were already installed. Upon further investigation, I found a suggested solution: install the proprietary drivers, install Steam, then switch back to the other drivers. I'm not really sure what "proprietary drivers" are, in all honesty. Has anyone gone through this process who could provide some insight here? (For reference, I installed the official 64-bit driver from the NVIDIA site for my 560 Ti, and the installed Ubuntu version is 64-bit as well.)
    Update: This is the error text I get when trying to run Steam after installing it via the Ubuntu store:

        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755:  3943 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
        Restarting Steam by request...
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME has been set by the user to: /home/dbrewer/.steam/ubuntu12_32/steam-runtime
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755:  4066 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"

    What I get when I run "steam --reset":

        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
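    "Proprietary drivers" here just means NVIDIA's closed-source driver as packaged by Ubuntu, as opposed to the open-source nouveau driver or a driver installed from NVIDIA's own .run file (the packaged and manually installed drivers can conflict). A hedged sketch of the usual checks on 14.04; the driver package version is illustrative:

        # See which driver Ubuntu recommends for this card
        sudo ubuntu-drivers devices

        # Install the packaged proprietary driver
        sudo apt-get install nvidia-331

        # Steam is a 32-bit binary; make sure the basic 32-bit GL libs exist
        sudo apt-get install libc6:i386 libgl1-mesa-glx:i386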

    Read the article

  • Windows: Running an AutoIt script to launch a GUI app - on a server, when no one is logged in

    - by mrled
    I want to run an AutoIt script every day at 1:00 AM on a Windows 2003 Server Standard Edition machine. Since this is a server, there is rarely someone sitting there logged in at the console, so the procedure needs to account for this. The AutoIt script in question launches and sends keypresses to a GUI app, so the process needs to include creating some sort of session for the user running the scheduled task. Is there a way to do this? I can't just use Scheduled Tasks to run the AutoIt script when no one is logged in - if I do, it fails to launch at all. I thought that I might be able to create an RDP session and run the scheduled task as that user, inside that session, but I haven't found a way to create an RDP session without launching mstsc.exe -- which is itself a GUI app, so I have the same problem again.
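    SendKeys-style automation needs a live interactive desktop, so the usual hedged workaround is to configure the account to log on automatically at boot and mark the task interactive-only with schtasks' /it switch; note that locking the console typically breaks simulated keystrokes, so the machine's physical security matters. A sketch with hypothetical names:

        rem Run only in the named user's interactive session (/it);
        rem /rp * prompts for the password at creation time
        schtasks /create /tn "NightlyAutoIt" /tr "C:\scripts\nightly.exe" ^
            /sc daily /st 01:00:00 /it /ru DOMAIN\autouser /rp *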

    Read the article

  • Flame Experiments Aboard the ISS Yield Surprising Results

    - by Jason Fitzpatrick
    Recent flame-based experiments aboard the International Space Station yielded results scientists simply thought couldn't happen: combustion in microgravity is a curious thing. Smithsonian magazine reports on the findings: Here on Earth, when a flame burns, it heats the surrounding atmosphere, causing the air to expand and become less dense. The pull of gravity draws colder, denser air down to the base of the flame, displacing the hot air, which rises. This convection process feeds fresh oxygen to the fire, which burns until it runs out of fuel. The upward flow of air is what gives a flame its teardrop shape and causes it to flicker. But odd things happen in space, where gravity loses its grip on solids, liquids and gases. Without gravity, hot air expands but doesn't move upward. The flame persists because of the diffusion of oxygen, with random oxygen molecules drifting into the fire. Absent the upward flow of hot air, fires in microgravity are dome-shaped or spherical—and sluggish, thanks to meager oxygen flow. "If you ignite a piece of paper in microgravity, the fire will just slowly creep along from one end to the other," says Dietrich. "Astronauts are all very excited to do our experiments because space fires really do look quite alien." Hit up the link below for the full article including how NASA is applying the findings.

    Read the article

  • mdadm raid5 recover double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0, not having a 100% backup. I know. I have an mdadm RAID5 system of 4x3TB drives, /dev/sd[b-e], all with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway.
    Recent events: The RAID became degraded after a two-drive failure. One drive [/dev/sdc] is really gone; the other [/dev/sde] came back up after a power cycle, but was not automatically re-added to the RAID. So I was left with a 4-device RAID with only 2 active drives [/dev/sdb and /dev/sdd]. Mistake #1, not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2, not making a backup of the superblock and mdadm -E output of the remaining drives.
    Recovery attempt: I reassembled the RAID in degraded mode with

        mdadm --assemble --force /dev/md0

    using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare, empty, identical drive. I removed the old /dev/sdc1 from the RAID:

        mdadm --fail /dev/md0 /dev/sdc1

    Mistake #3, not doing this before replacing the drive. I then partitioned the new /dev/sdc and added it to the RAID:

        mdadm --add /dev/md0 /dev/sdc1

    It then began to restore the RAID. ETA 300 mins. I followed the process via /proc/mdstat to 2% and then went to do other stuff.
    Checking the result: Several hours (but fewer than 300 mins) later, I checked the process. It had stopped due to a read error on /dev/sde1.
    Here is where the trouble really starts: I then removed /dev/sde1 from the RAID and re-added it. I can't remember why I did this; it was late.

        mdadm --manage /dev/md0 --remove /dev/sde1
        mdadm --manage /dev/md0 --add /dev/sde1

    However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean, with what I thought was the right order, and with /dev/sdc1 missing:

        mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1

    That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4.)
    Device order: I then checked a recent backup I had of /proc/mdstat, and I found the drive order:

        md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1]
              8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

    I then remembered this RAID had suffered a drive loss about a year ago, and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit... so there was no drive [3], but only [0], [1], [2], and [4]. I tried to find the drive order with the Permute_array script (https://raid.wiki.kernel.org/index.php/Permute_array.pl) but that did not find the right order.
    Questions: I now have two main questions:
    1. I screwed up all the superblocks on the drives, but only gave mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1). Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is OK] if I just find the right device order?
    2. Is it important that /dev/sde1 be given device number [4] in the RAID? When I create it with

        mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1

    it is assigned number [3]. I wonder if that is relevant to the calculation of the parity blocks. If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]? If I could get that to work, I could start it in degraded mode, add the new drive /dev/sdc1, and let it resync again.
    It's OK if you would like to point out to me that this may not have been the best course of action, but you'll find that I realized this. It would be great if anyone has any suggestions.
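    Not an answer to the ordering itself, but a hedged safety sketch for trying candidate orders: with --assume-clean the data area should be left alone, and each attempt can be probed read-only before any mount (metadata version and chunk size are pinned to match the old mdstat line):

        # For each candidate order: stop, recreate, probe without writing
        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean --metadata=1.2 --chunk=512 \
            -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1
        fsck.ext4 -n /dev/md0          # -n: read-only check, no repairs
        mount -o ro /dev/md0 /mnt      # only if the fsck output looks sane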

    Read the article

  • Releasing an open source project without getting embarrassed

    - by Hopeful
    I've been working by myself on a fairly large open source project for quite a while, and it's nearing the point where I'd like to release it. However, I'm self-taught and I don't really know anyone who could adequately review my project. A few years ago, I released a small bit of code which pretty much got ripped apart (in a critical sense) on the forum where I released it. Even though the code worked, the criticism was accurate but brutal. It prompted me to begin searching for best practices for everything, and in the end I feel that it made me a much better developer. I've gone over everything in my project so many times trying to make it perfect that I've lost count. I believe in my project and think it has the potential to help a lot of people, and I feel like I've done some cool things in interesting ways with it. Still, because I'm self-taught, I can't help but wonder what gaps exist in my self-education. The way my code was ripped apart last time isn't something I'd like to repeat. My two biggest fears with releasing this project that I've poured countless hours into are being absolutely embarrassed because I missed some patently obvious things due to my self-education or, worse, releasing it to the sound of crickets. Is there anyone who has been in a similar situation? I'm not afraid of constructive criticism, so long as it is constructive and not just a rant on how I screwed up. I know there is a code review site on StackExchange, but it's not really set up for large projects, and I didn't feel the community there was large enough yet to get good feedback if I were to post parts of my project piecemeal (I tried with one file). What can I do to give my project at least some measure of success without getting embarrassed or devastated in the process?

    Read the article

  • Developing a feature whose sole purpose is to be taken out?

    - by adib
    What is the name of the pattern in which individual contributors (programmers/designers) develop an artifact whose sole purpose is to serve as a diversion, so that management can remove that feature from the final product? This is folklore I heard from an ex-colleague who used to work at a large game development company. At that company, it is well known that middle management is pressured to "give inputs" and "make changes" to the product, otherwise they risk being seen as not contributing to the project. This situation has delayed many projects because of these superfluous "management inputs". In one project at the above company, the artists and developers created a supernumerary animated character that appeared in every cutscene and stuck out like a sore thumb. They designed it in such a way that it could be easily removed before the game shipped (this was when games were still sold on physical media and not as downloadable products). Obviously the management then voted to remove the animation. On the positive side, management didn't introduce any unnecessary changes that would have delayed the project, because they had shown that they provided constructive input to the product. This process pattern has a name among game programmers who work in corporations, but I forgot what the actual name was. I believe it's duck-something. Can anybody help point out the name, and perhaps some rather credible reference to how the pattern develops?

    Read the article

  • links for 2010-06-04

    - by Bob Rhubart
    @biemond: JEJB Transport and manipulating the Java Response in OSB 11g - "JEJB Transport works like the EJB Transport," says Oracle ACE Edwin Biemond, "but the request and response objects are not translated to XML so you can't use XQuery etc. To make things not too hard, OSB 11g makes an XML presentation of the request method and its parameters, which you can use in the Proxy Service." (tags: oracleace soa oracle jejb java)
    @bex: Oracle UCM jQuery Plugin - "This connector allows you to use jQuery to make UCM Service calls through AJAX, and easily display the results," says Oracle ACE Director Bex Huff. "This is 100% pure JavaScript; no Java, Idoc, or ADF required!" (tags: oracleace ucm oracle otn enterprise2.0)
    Oracle Solaris Studio Express 6/10 and its Customer Feedback Program are now available (Oracle Developer Tools Blog) - "Oracle Solaris Studio Express 6/10 is available on Solaris 10 (SPARC, x86), OEL 5 (x86), RHEL 5 (x86), SuSE 11 (x86) today and will be available for OpenSolaris in the near future," says Pieter Humphrey. (tags: oracle otn solaris sparc linux)
    @soatoday: EA and SOA Should Report to COO - "So, who gets EA - the CIO or VP of a Business? I argue neither! After all, a typical EA goal is to connect the Business and IT together to impart better structure and visibility across the enterprise. I firmly believe that neither should own EA, so that neither imparts too much of their organization (i.e. bias) on the EA process and deliverables. EA needs to be independent, and it's for all the right reasons." - Oracle ACE Director Jordan Braunstein (tags: oracleace entarch soa)

    Read the article

  • A better way to encourage contributions to OSS

    - by Daniel Cazzulino
    Currently in the .NET world, most OSS projects are available via a NuGet package. Users have a very easy path towards *using* the project right away. But let's say they encounter some issue (maybe a bug, maybe a potential improvement) with the library. At this point, going from user to contributor (of a fix, a good bug repro, or even a spike for a new feature) is a very steep and non-trivial multi-step process: registering with some open source hosting site (codeplex, github, bitbucket, etc.), learning how to grab the latest sources, building the project, formulating a patch (or forking the code), learning the source control software they use (mercurial, git, svn, tfs), installing whatever tools are needed for it, reading about the contributor workflow for the project (do you fork & send pull requests? do you just send a patch file? do you just send a snippet? a unit test? etc.), and on, and on, and on. Granted, you may be lucky and already know the source control system the project uses, but in reality, I'd say the chances are pretty low. I believe most developers *using* OSS are far from familiar with these systems, much less with contributing back to various projects. We OSS devs like to be on the cutting edge all the time, ya' know, always jumping on the new SCC system, the new hosting site, the new agile way of managing work items, bug tracking, code reviews, etc. But most of our OSS users are largely the "... Read full article

    Read the article
