Search Results

Search found 14122 results on 565 pages for 'cable management'.


  • Very slow printing from print server

    - by evolvd
    Print server is a VM on Xen. The VM is Windows 2003 32-bit. During the issue the VM is not being taxed in any way: CPU, memory, disk read/write, and network speed all look good. The problem that I see is the transfer of the print file from the print server to the printer. The 80 MB file is transferred from the client to the print server in about 2 minutes, but it then takes about 2 hours for that file to be sent to the printer. I can't figure out why this would just start to happen. The printer is rebooted every evening and is just used for one large print job in the morning. The server has been rebooted with no effect. I changed the spool option to send the entire spool to the server before printing starts, and it had no effect.

    This printer problem did happen to come about after some changes to the Xen environment. The Xen servers changed from using HBA NIC cards to software iSCSI, and a new switch was put in. I don't think this is related to the problem, since all the speeds on the VMs are better now. The change happened on Saturday and the first print to this printer happened on Monday morning. I'm just putting that out there, but like I said, I don't think it is related; I just don't want to rule it out. At this point I don't have many other options besides the physical layer. I can switch out the network cable that goes to the printer, and I might be able to print the same job to another printer. I won't be able to test those things until this afternoon, though. Any other ideas or tests I could do to try to find the reason for the slow speed? I forgot to say that this is only happening when printing to this one printer.

    ===Update=== I found out that there are a few printers that currently have this issue, not just the one. There are over 30 printers on the server, though, so I know it's not happening to all of them. I printed a large PDF doc from the server and it printed at the normal speed. If the client machine sends the large print request, it gets to the server fine but is then slow to get from the server to the printer. If sent directly to the printer, it arrives at the normal speed. The question now is: why is there a speed difference when the job comes from the client machine, and why would it start now?

    Read the article

  • Texture allocations being doubled in iPhone OpenGL ES

    - by Kyle
    The below couple of lines are called 15 times during initialization. The tx_size is reported as 512 every time, so this will allocate a 1 MB image in memory 15 times, for a total of 15 MB used. However, I noticed Instruments is reporting a total of 31 allocations! (15*2)+1

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tx_size, tx_size, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
        free(spriteData);

    Likewise, in another area of my program that allocates 6 256x256x4 (256 kB) textures, I see 13 sitting there. (6*2)+1 Anyone know what's going on here? It seems like awful memory management, and I really hope it's my fault. Just to let everyone know, I'm on the simulator.

    Read the article

  • Full text searching in SQL Server 2008 Express Advanced

    - by Iain Macleod
    Hi, I have recently installed SQL Server 2008 Express Edition with Advanced Services on XP Pro, but am having trouble getting full-text searching to work with a restored database. The database was originally created in SQL Server 2005. When I call a stored proc that uses the full-text index, I get the following error: "Full-Text Search is not installed, or a full-text component cannot be loaded." This is my db version: Microsoft SQL Server 2008 (RTM) - 10.0.1600.22 (Intel X86) Jul 9 2008 14:43:34 Copyright (c) 1988-2008 Microsoft Corporation Express Edition with Advanced Services on Windows NT 5.1 (Build 2600: Service Pack 3). When I run SELECT DATABASEPROPERTY('DBNAME','ISFULLTEXTENABLED') I get 1. Also, when I look in the advanced properties for the db server in Management Studio, I see both the "Default Full-Text Language" and "Full-Text Upgrade Option" properties. However, when I go to SQL Server Configuration Manager I don't see the "MSSQLFDLauncher" service. Does anyone know how to get this working? Cheers, Iain
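
    A quick instance-level check worth running here (a sketch, not from the original post): the DATABASEPROPERTY call above only reports whether the database is flagged as full-text enabled, while SERVERPROPERTY reports whether the full-text component is actually installed on the instance, which is what the error is complaining about.

        -- Returns 1 if the full-text component is installed on this instance, 0 if not.
        -- If it returns 0, re-run SQL Server setup and add the Full-Text Search feature.
        SELECT SERVERPROPERTY('IsFullTextInstalled') AS FullTextInstalled;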

    Read the article

  • GetDiskFreeSpaceEx reports wrong number of free bytes

    - by rboorgapally
        __int64 i64FreeBytes;
        unsigned __int64 lpFreeBytesAvailableToCaller,
                         lpTotalNumberOfBytes,
                         lpTotalNumberOfFreeBytes;  // variables used to obtain the free space on the drive

        GetDiskFreeSpaceEx(Manager.capDir,
                           (PULARGE_INTEGER)&lpFreeBytesAvailableToCaller,
                           (PULARGE_INTEGER)&lpTotalNumberOfBytes,
                           (PULARGE_INTEGER)&lpTotalNumberOfFreeBytes);
        i64FreeBytes = lpTotalNumberOfFreeBytes;
        _tprintf(_T("Number of bytes free on the drive: %I64u \n"), lpTotalNumberOfFreeBytes);

    I am working on a data management routine which is a Windows CE command-line application. The above code shows how I get the number of free bytes on the drive that contains the folder Manager.capDir (the variable containing the full path name of the directory). My question is: the number of free bytes reported by the above code (the _tprintf statement) doesn't match the number of free bytes of the drive (which I check by right-clicking on the drive). What is the reason for this difference?

    Read the article

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple-monitors world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up to the HDMI port of an ATI Radeon HD 4300/4500 Series display adapter. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same display adapter. Both monitors displayed the boot sequence after I fired up good old Sarastro2 (Asus P5Q Pro Turbo – Dual Core E5300 – 2.60 GHz). The Asus lagged half a second behind the Princeton until the Windows 7 Ultimate SP1 boot-up was complete. Then the Asus displayed "HDMI NO SIGNAL" and went into hibernation. The Princeton stayed lit up as before. Both monitors appear in the "Screen Resolution" setup display, and I played around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still-hibernating Asus. "Multiple displays:" is set to "Extend these displays", the orientation is "Landscape", and the resolutions are both set to the "recommended" one. Both monitors show that they work properly in the advanced properties display. What am I doing wrong; what am I missing? Never mind the opinions about the different resolutions of the two monitors. I can always unhook the Princeton and give it to a Goodwill store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated. Thank you.

    Thank you, Anees Bakrain. Only the ATI Radeon HD 4300/4500 Series adapter is displayed in the Device Manager, so I have to assume that the onboard display adapter is not active. All 40 drivers of Sarastro2 are up to date, and the HDMI cable cannot be the problem because both monitors displayed the boot sequence up to the moment when Windows 7 was loaded completely. That was the moment when the Asus monitor lost its signal. Both connectors, HDMI and DVI, are connected, and removing the DVI connector would not solve my problem of running both monitors simultaneously. However, your suggestions shifted my seventy-one-year-old brain into the next gear. The only question remaining is why the signals to the Asus monitor stop after the boot sequence is complete. The ATI Radeon HD 4300/4500 Series adapter seems to be capable of sending simultaneous HDMI and DVI signals, which is exactly what it does during the boot sequence. Why the signals change after the boot sequence is complete is the key question, der springende Punkt ("the crucial point"). Is this a correct assumption, slhck?

    Read the article

  • Need creative machine name suggestions for a dev machine

    - by Jay
    So... I have a Windows machine running a dev DB server (Oracle), an SVN server (VisualSVN) and a project management tool (Redmine). I need suggestions for a good host name for this machine, one that is very easy to remember and sounds creative. Would love to hear from your experiences, for inspiration :) Here is what is on my mind right now (xyz being the project name):

    <xyz>forge
    <xyz>labs

    Need more along these lines. Thanks for all your help.

    Read the article

  • What next in the career map for a Lead QA Engineer

    - by chandran
    I am a Lead QA Engineer in a software company and at a stage in my career where I need to plan my next move. Option 1: The very obvious move would be to stay a QA Lead and eventually become a QA Manager. But I don't see very good prospects/future after that. Or am I wrong? Option 2: I love programming/coding, though I haven't spent a whole lot of time on that, so a direct move to becoming a Software Developer is not possible. Will moving to Test Automation eventually lead me to development? Even so, am I looking at a step-down in pay and career level? Option 3: Moving to Product Management. Is this even possible, and if so, what would be the best approach? Appreciate all your responses in advance. Thanks.

    Read the article

  • XP shared folders not accessible after BIOS changed

    - by stijn
    Here's what worked for over a year: PC A runs Windows 7, PC B runs Windows XP. Both are on the same subnet behind a router. A uses user account X, but logs in to PC B using the Administrator account. PC B is a Dell Precision 470. A known problem with these is that sometimes, when plugging in the power cable, the machine somehow loses all BIOS settings. This happened yesterday. After this happens Windows won't boot, because the default BIOS setting is 'RAID ON' while there is no RAID configured. No problem though; changing the BIOS setting to 'RAID OFF' makes it boot without problems. Note that in the meantime, nothing config-related was changed on machine A. It wasn't even on.

    Indeed, after doing this, everything is fine. Everything includes all normal operations: remote desktop from PC A to PC B, running Synergy between A and B, accessing shared folders from B to A. But accessing the shared folders on B from A does not work any more. I tried pretty much everything I found via Google (fiddling with policies/registry keys/...) but to no avail.

        > ping -a 192.168.2.2
        Pinging A [192.168.2.2] with 32 bytes of data:
        Reply from 192.168.2.2: bytes=32 time<1ms TTL=128

        > net view \\192.168.2.2
        System error 5 has occurred. Access is denied.

        > net use /persistent:no K: \\A\myshare /user:A\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:192.168.2.2\USERNAME PASSWORD
        > net use /persistent:no K: \\192.168.2.2\myshare /user:USERNAME PASSWORD
        System error 86 has occurred. The specified network password is not correct.

    A solution to this would be great: I haven't been able to do any work since yesterday ;]

    Update: after taking the hard drive out of B and putting it in another Precision 470 with almost exactly the same hardware (at first sight, only the video card differs), the shared folders work. Putting the disk back into B, the same problem remains. Why does this depend on hardware, and more importantly, on which hardware?

    Read the article

  • Freelancer's problem and legal actions

    - by user198003
    Hi, last year I worked as a freelancer on one project. Unfortunately, the person I worked for decided he was not happy with my work, and he fired me. After 7 months, he called me and asked me to return "his" money; otherwise, he will sue me. His version is that I have to give him 1500 euros, but my version is that he owes me another 1500. I have no contract, only emails and an Excel file with logged hours. What do I have to do? I don't want to give him back the 1500, because it was my work, and it was his bad management. Also, I do not want my 1500, because I think claiming it would not be fair from my side. What should I do?

    Read the article

  • Storing an image map in a database

    - by ColoradoRockie
    I have a requirement to allow users in a content management system to create their own image maps through a GUI interface, which I have accomplished. But instead of saving the image map to the page code, I want to save the image map code to a database (SQL), which I've also accomplished. When I started down this road I was thinking the whole time that I'd just add the "usemap" attribute at runtime, as shown below, where promo1.ImageMap holds the entire map code:

        if (promo1.HasImageMap)
            imgPromotion1.Attributes.Add("usemap", promo1.ImageMap);

    I guess I didn't think it through well enough, because it seems that "usemap" expects only the name of an existing map in the page code, not the map code as a string. Does anyone have any clever ideas on how to apply the map from the database to the image at run time?

    Read the article

  • Why does an unpartitioned Hitachi HDS5C3020 drive start consuming 50% more power 15 minutes after boot?

    - by Pro Backup
    In a Debian 6.0.6 system there are 74 2 TB Toshiba DT01ABA200 drives. These drives identify themselves as Hitachi HDS5C3020BLE630 drives running firmware revision MZ4OAAB0. 64 drives are attached via HP SAS expander cards to an LSI 2008 SAS controller, another 5 drives are connected directly to the mainboard, 4 drives are connected to a Sil-based PCI controller, and the last drive is only powered and has no data cable connected. The onboard BIOS of both the LSI and Sil cards is disabled, and the mpt2sas and sata_sil modules are removed from the Linux debian 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux kernel. The mpt2sas module is loaded after boot using a modprobe command in /etc/rc.local. These 74 drives are not partitioned, not formatted and not mounted.

    The system consumes:

    with 0 drives: 70.6 - 70.9 Watt (also 15 minutes after boot);
    with 74 drives: 330 - 360 Watt just after boot (equivalent to 3.5 - 3.9 W per drive in idle state);
    with 74 drives: 420 - 466 Watt, each time in the 15th minute of uptime (equivalent to 4.7 - 5.3 W per drive in idle state).

    The drive specification lists 4.7 W for read/write and 3.3 W for idle power consumption. The increased power consumption is most likely on the 5 V line, because after roughly 1 minute an "over current protection" (OCP) of the power supply (PSU) shuts down the power. The PSU used is a single-rail model with an OCP of 122 A on the 12 V line and 55 A on the 5 V line.

    Regression: it doesn't matter whether the drives' APM value is set to disabled or 1 (maximum power saving). The operating system records no read/write activity in /proc/diskstats; the values there are identical (28 read, 0 write operations) to those immediately after the modprobe operation. I can't test what happens when booting into the mainboard's BIOS - to exclude any OS intervention - because the Super Micro X8SI6-F mainboard running firmware 06/27/12 has a bug that incorrectly reads a +74.0 C CPU sensor temperature as "High" in BIOS mode and shuts down the power after 1 minute.

    What might be causing the drive read/write activity on all drives in the 15th minute after boot, and how can I prevent it from happening?

    Read the article

  • How do you send email invites to people who have been invited by users of your website?

    - by Arpit Rai
    We've developed a web application where people can sign up on our website to make use of our service. We have functionality that allows users to send invites to their friends by looking up their contacts on Gmail, Yahoo Mail, etc. My question is: do we have to use third-party email management software like MailChimp or SendGrid to send such emails, or should we send them directly? If we send the emails directly and the recipients start marking those emails as spam, isn't there a very high chance that we might get banned by Gmail, Yahoo, etc.?

    Read the article

  • NuGet - managing and removing multi-version packages in a single solution

    - by Myles McDonnell
    SCENARIO: One VS solution with n projects. Project A references package Y v1, Project B references package Y v2. It is not possible to update all references to package Y for all projects in the solution using the NuGet package manager dialog at the solution level; that is only possible when all projects reference the same version of package Y. Not a big deal for only two projects, but I'm dealing with lots of projects that, through poor package management, reference many package versions when they should all reference the same version. Before I spend the afternoon writing a console app to auto-update all packages.config files for a solution so that each referenced package is only referenced via its latest version (latest referenced, not the very latest, with exceptions/caveats etc.)... is there a tool/method for doing this already? Or some other approach I am unaware of?

    Read the article

  • How to schedule daily backup in SQL Server 2008 Web Edition

    - by Xenon
    In SQL Server Management Studio I created a maintenance plan, but it won't work. The error is: "Message Executed as user: LITESPELL-19C34\Administrator. Microsoft (R) SQL Server Execute Package Utility Version 10.0.1600.22 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. The SQL Server Execute Package Utility requires Integration Services to be installed by one of these editions of SQL Server 2008: Standard, Enterprise, Developer, or Evaluation. To install Integration Services, run SQL Server Setup and select Integration Services. The package execution failed. The step failed." But on the Microsoft page http://www.microsoft.com/sqlserver/2008/en/us/web.aspx, in the "Automate tasks and policies" section, it is written that backups can be scheduled in this edition. How?
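
    A workaround sketch (assuming SQL Server Agent is available, which Web Edition includes even though Integration Services is not): skip the maintenance plan entirely and schedule a plain BACKUP DATABASE statement as a T-SQL job step in a SQL Server Agent job. The database name and path below are placeholders.

        -- T-SQL job step body; schedule the job daily in SQL Server Agent.
        BACKUP DATABASE [YourDatabase]
        TO DISK = N'D:\Backups\YourDatabase_daily.bak'
        WITH INIT,       -- overwrite yesterday's file instead of appending
             CHECKSUM,   -- verify page checksums while writing the backup
             STATS = 10; -- report progress every 10 percent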

    Read the article

  • Best way to auto-restore a db on an hourly basis

    - by aron
    Hello, I have a demo site where anyone can log in and test a management interface. Every hour I would like to flush all the data in the SQL 2008 database and restore it from the original. Red Gate SQL has some awesome tools for this, however they are beyond my budget right now. Could I simply make a backup copy of the database's data file, then have a C# console app that deletes it and copies over the original? Then I could have a Windows scheduled task run the .exe every hour. It's simple and free... would this work? I'm using SQL Server 2008 R2 Web Edition. I understand that Red Gate is technically better because I can set it to analyze the db and only update the records that were altered, whereas the approach above is more of a "sledgehammer".
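
    A sketch of a simpler route (database name and path are placeholders): instead of copying data files around, take one reference backup of the pristine demo database and restore over the live copy every hour, e.g. from a scheduled task running sqlcmd or from a SQL Server Agent job.

        -- Restore the demo database to its pristine state, dropping demo users first.
        ALTER DATABASE [DemoDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        RESTORE DATABASE [DemoDb]
        FROM DISK = N'D:\Backups\DemoDb_reference.bak'
        WITH REPLACE;  -- overwrite the current state unconditionally
        ALTER DATABASE [DemoDb] SET MULTI_USER;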

    Read the article

  • Skinning WinAPI Controls

    - by Brad
    If you've ever seen an application in the Adobe Creative Suite 5 (CS5), you may have noticed that it doesn't look like the native Windows GUI; Adobe has modified it to have a different look. Where would someone begin to make an application that has a custom skin? CS5 uses the Adobe Source Libraries for its widget/control management, so I tried downloading and compiling them to see if I could make a nicely skinned app like Photoshop CS5, but after finally getting the code to compile and testing it, I realized the library is only for managing widgets, not for skinning the GUI the way CS5 does. Where would I begin to make a nicely skinned program like the Adobe CS5 applications? Can anyone point me in the right direction? Do I simply handle the WM_PAINT message from WinAPI and render my own widgets using OpenGL or something?

    Read the article

  • Change write-host output color based on foreach if elseif outcome in Powershell

    - by Emo
    I'm trying to change the color of Write-Host output based on the LastRunOutcome property of SQL Server jobs in PowerShell... as in, if a job was successful, the output of LastRunOutcome is "Success" in green; if it failed, then "Failed" in red. I have the script working to get the desired job status; I just don't know how to change the colors. Here's what I have so far:

        # Check for failed SQL jobs on multiple servers
        [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null
        foreach ($svr in get-content "C:\serverlist2.txt")
        {
            $a = get-date
            $BegDate = (Get-Date $a.AddDays(-1) -f d) + " 12:00:00 AM"
            $BegDateTrans = [system.datetime]$BegDate
            write-host $svr
            $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" "$svr"
            $srv.jobserver.jobs |
                where-object {$_.lastrundate -ge $BegDateTrans -and $_.Name -notlike "????????-????-????-????-????????????"} |
                format-table name,lastrunoutcome,lastrundate -autosize
            foreach ($_.lastrunoutcome in $srv.jobserver.jobs)
            {
                if ($_.lastrunoutcome = 0) { -forgroundcolor red }
                else {}
            }
        }

    This seems to be the closest I've gotten, but it's giving me an error of "'LastRunOutcome' is a ReadOnly property." Any help would be greatly appreciated! Thanks! Emo

    Read the article

  • Specialization hierarchy in a domain model

    - by devoured elysium
    I'm trying to make the domain model of a management system. I have the following kinds of persons in this system: employee, manager, top manager. I decided to define a User, from which employee, manager and top manager will specialize. What I don't know is what kind of specialization hierarchy I should choose. I thought of two ways (shown in two diagrams that have not survived here). Which might be preferable, and why? As a long-time coder, every time I try to do a domain model, I have to fight against the urge to think about how I'm going to code this. From what I've understood, I should not think about those matters in the domain model, only about object relationships. I don't have to think about code duplication or those kinds of details here, so I can't really pick one of the options over the other. Thanks

    Read the article

  • How do you energize yourself when working alone on a project?

    - by Stephane
    I am working in an environment with a very small team (3 developers only), and each of us has been assigned a different project, not counting support tasks. I know this is a bad business practice and that we should all work on a single project at a time, then move on to the next one (I've already explained to management how much it sucks). So don't answer that we should all work together on one project at a time. When working in a team, energizing the work mostly meant pair programming; we did that when fewer projects were thrown at us, and it was great. What I would like to know is how you energize your work when working alone on a project. Do you follow any particular practice?

    Read the article

  • How does SQL Server treat statements inside stored procedures with respect to transactions?

    - by Sleepless
    Hi all! Say I have a stored procedure consisting of several separate SELECT, INSERT, UPDATE and DELETE statements. There is no explicit BEGIN TRAN / COMMIT TRAN / ROLLBACK TRAN logic. How will SQL Server handle this stored procedure transaction-wise? Will there be an implicit transaction for each statement? Or will there be one transaction for the whole stored procedure? Also, how could I have found this out on my own using T-SQL and/or SQL Server Management Studio? Thanks!
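
    One way to observe the default behavior (a sketch, not from the original question): without an explicit BEGIN TRANSACTION, each statement runs in its own autocommit transaction, so @@TRANCOUNT stays at 0 between the statements inside the procedure.

        -- Minimal sketch for checking transaction scope from Management Studio.
        CREATE PROCEDURE dbo.TranDemo
        AS
        BEGIN
            SELECT @@TRANCOUNT AS TranCountDefault;   -- 0: autocommit, one transaction per statement
            BEGIN TRANSACTION;
            SELECT @@TRANCOUNT AS TranCountExplicit;  -- 1: statements now share one transaction
            COMMIT TRANSACTION;
        END;
        GO
        EXEC dbo.TranDemo;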

    Read the article

  • What should I consider when deploying a new web farm?

    - by tsilb
    My web app has been chugging along just fine in production for years with one app server and one web server. Now we're moving to a multi-server environment with 2 app and 3 web servers. I have enough time to make changes before the go-live. As a developer, what considerations should I take into account from coding, deployment, and architectural/ecosystem-management perspectives? Already on my list:

    - Remove tight coupling between servers
    - Store applicable files (i.e. downloadables) in IMAGE fields in SQL instead of as files on the app server
    - Deployment: take one node out of the farm at a time

    Read the article

  • iPad crashes that aren't happening on iPhone or iPod Touch

    - by alyoshak
    Has anyone had difficulty getting what has otherwise been a solid iPhone app working on the iPad? I was under the impression that iPhone apps would run without problems on the iPad. We are experiencing crashes (not intermittent - same place, at same time) that we've never gotten on the iPhone or iPod Touch. I have become suspicious that the crashes are memory-management related, but even if so, why only on the iPad?

        2010-05-17 10:19:06.474 ASSIST[82:207] *** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<UISectionRowData 0x6041480> valueForUndefinedKey:]: this class is not key value coding-compliant for the key deliveryDate.'
        2010-05-17 10:19:06.481 ASSIST[82:207] Stack: ( 852041337, 861292157, 852040861, 850755255, 850750995, 850758945, 81279, 123007, 126693, 149141, 851599725, 827486573, 827486477, 827486431, 827485745, 827487359, 827454123, 851903137, 851590065, 851588321, 819339483, 819339655, 827151561, 827144691, 9461, 9324 )
        terminate called after throwing an instance of 'NSException'
        Program received signal: "SIGABRT".

    Read the article

  • Subversion for version control

    - by Gabriel Parenza
    Hi, I am working on an application whose primary purpose is to provide source control management. My idea is to use SVNKit for file check-out and check-in. However, while working with SVNKit, I realised it does not have the speed I was looking for. For instance, whenever developers create a ChangeRequest, which can encompass changes to 3-40 files, I have to create a directory structure distributed across 32 folders. Doing so takes around 50 seconds. Another instance: after creating a change request, developers can add files to it, and copying even a single file from trunk to a branch takes around 6-7 seconds. My question is: has anyone had an experience like this, and what did you do to improve the performance? Moreover, is my approach correct? NOTE: I am using the "http" protocol and can't use the "svn" protocol.

    Read the article

  • Mercurial setup: One central repo or several?

    - by Robert S.
    My company is switching from Subversion to Mercurial. We're using .NET for our product. We have a solution with about a dozen projects that are separate modules with no dependencies on each other. We're using a central repo on a server with push/pull for our integration build. I'm trying to figure out if I should create one central repo with all the projects in it, or if I should create a separate repo for each project. One argument for separate repos is that branching the individual modules would be easier, but an argument for a single repo is easier management and workflow. I'm very new to hg and DVCS, so some guidance is greatly appreciated.

    Read the article

  • SQL Server 2008 - Script Data as Insert Statements from SSIS Package

    - by Brandon King
    SQL Server 2008 provides the ability to script data as INSERT statements using the Generate Scripts option in Management Studio. Is it possible to access the same functionality from within an SSIS package? Here's what I'm trying to accomplish... I have a scheduled job that nightly scripts out all the schema and data for a SQL Server 2008 database and then uses the script to create a "mirror copy" SQL CE 3.5 database. I've been using Narayana Vyas Kondreddi's sp_generate_inserts stored procedure to accomplish this, but it has problems with some data types and greater-than-4K columns (holdovers from its SQL Server 2000 days). The Script Data function looks like it could solve my problems, if only I could automate it. Any suggestions?

    Read the article
