Search Results

Search found 30804 results on 1233 pages for 'hardware test'.


  • Should I enable 802.3x hardware flow control?

    - by Stu Thompson
    What is the conventional wisdom regarding 802.3x flow control? I'm setting up a network at a new colo and am wondering whether I should enable it or not. My oh-cool-a-bright-and-shiny-new-toy self wants to enable it, but this seems like one of those decisions that could blow up in my face later on. My network:
    - An HP ProCurve 2510G-24 switch
    - A pair of Debian 5 HP DL380 G5s with built-in NC373i 2-port NICs, LACP'd as one link, 9000-byte jumbo frames enabled (application)
    - A pair of hand-built Ubuntu servers with 4-port Intel Pro/1000 NICs, LACP'd as one link, 9000-byte jumbo frames enabled (NAS)
    - A few other servers with single 1 Gbps ports, and one with 100 Mbps
    Most of this kit supports 802.3x. I've been enabling it as I go along, and am about to test the network. But as my 'go live' day nears, I am worried about the 802.3x decision, as I've never explicitly used it before. Also, I've read some 10-year-old articles out there on the Intertubes that warn against using flow control. Should I be enabling 802.3x hardware flow control?
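    Whatever the decision ends up being, it seems worth confirming that the pause settings on the server NICs actually match what the switch ports advertise. As a minimal sketch only (assuming Linux hosts with ethtool available, and placeholder interface names eth0/eth1 standing in for the real slave NICs under the LACP bonds):

      # Show current autonegotiation / RX / TX pause settings per slave NIC
      for nic in eth0 eth1; do
          ethtool -a "$nic"
      done

      # Enable (or, with 'off', disable) 802.3x pause frames on one NIC
      ethtool -A eth0 rx on tx on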

    Read the article

  • Hardware doesn't work sometimes

    - by Ali
    I have an Asus n51vf laptop. I bought it not long ago (less than a month) and installed Windows 7 on it. Since it wasn't new when I bought it, I don't know if the problem was occurring before re-installing Windows 7. There are two problems that might (or might not) have something to do with each other. The less annoying problem is that the screen flickers ONLY on YouTube when scrolling and/or pausing/playing the video or moving the cursor over the video. The second, more time-consuming problem is that some of the hardware doesn't work when I turn on the computer. Then I have to reboot the computer and try again. Example: I turn on the computer and the touchpad doesn't work - I then reboot and it works again. This has so far happened to the screen (leading to a black screen), the keyboard and the touchpad; most frequently the mouse. What have I tried so far? I tried reinstalling the touchpad driver (Synaptics touchpad driver from the Asus support page) - it didn't help at all. I even tried reinstalling the driver when the non-responding-mouse-on-startup problem occurred (using the keyboard), but it asked me to reboot and didn't help at all. I also installed (and reinstalled) the graphics driver; it's a GeForce GT 130M. Now, the problem never occurs spontaneously - it only appears when booting my computer. I can even use the computer for heavy tasks (games, multiple programs running, etc.) with no problem at all. I suspect that the problem is in some boot mechanism that fails: maybe the Windows driver loading? Motherboard problems?

    Read the article

  • Error - "IR Hardware not detected" - but it's installed/working

    - by Robert
    I am trying to do Settings > TV > Set up TV signal. During this process I am getting the error "IR Hardware not detected." With the remote, I can select the "try again" button (to re-detect) and it tries again, so the remote works. Plugging in the "IR blaster" doesn't change anything. (I wouldn't expect any difference, but I read a post which said you needed that. I will get Media Center to change channels if I can get that working - but first things first.) I was able to do the setup months ago when I had cable, and everything was fine. I just got DirecTV. (BTW - during the above process, Media Center detects the signal coming in on channel 3. Windows XP Media Center SP3. The TV tuner card is a Pinnacle TCTV HD PCI. Everything - and I mean everything - has the latest firmware and drivers - as of 4 months ago when I fixed a different problem. So I DON'T WANT TO HEAR the standard answer to check drivers/firmware. THANK YOU.) Thanks for any help.

    Read the article

  • Hardware issue: computer plays beep sounds at startup, and then turns off

    - by Darkkurama
    It turns out that after moving, my computer got damaged somehow. I set the computer up 2 days ago. I didn't check the state of the hardware parts and turned the PC on recklessly, not realizing that the CPU fan was detached from the CPU. After this I reattached the fan. The computer worked somewhat faultily after that: at startup it would play a long beep (different from the normal beep) and wouldn't display anything. I tried restarting after this incident and it got fixed (I don't know how, but it did), and I was even able to play some video games for several hours (more than 5). This morning I tried to get the issue sorted out by checking the CPU and the connections in general again, and to my surprise, when I turned it on, the computer wouldn't boot at all. I reapplied thermal paste to the CPU and fan, but got no results (I did this because the old paste was very badly distributed along the surface of both the CPU and the fan). Now every time I turn it on, the computer acts randomly:
    - Sometimes at startup it plays a long continuous beep and then turns off.
    - One time it played a long continuous beep and got to the Windows password input screen, but then it turned off.
    - More frequently, I turn it on and after some seconds it turns off without any sound.
    You can check the video I recorded, which shows how the computer behaves most of the time, and another one. I tried troubleshooting it myself by disconnecting my graphics card, RAM, HDD and even the CPU and its fan, with no result. Computer specs: Intel Core 2 Quad Q6600 processor, Nvidia XFX GTX 260 Black Edition, 4 GB RAM, 500 GB hard drive, Windows 7 64-bit.

    Read the article

  • Create a partition table on a hardware RAID1 drive with [c]fdisk

    - by Lev Levitsky
    My question is, is there a reason for this not to work? Details: I have two 500 GB drives, and my motherboard has RAID support, so I created a RAID1 array and booted from a Linux live medium. I then listed the disks and, apart from the obvious /dev/sda, /dev/sdb, etc., there was /dev/md126 which, I figured, was the mirrored "virtual" drive. Its size was 475 GB; I had seen that the size of the array would be smaller than 500 GB when I was creating it, so no surprise there. I did cfdisk /dev/md126, created the necessary partitions and chose write. It's been about half an hour now, I think. It doesn't seem like it's ever going to finish. The only thing about cfdisk in dmesg is that it's "blocked for more than 120 seconds". Doing fdisk -l /dev/md126 in another terminal I see all three partitions I created and a note that "Partition 1 does not start on a physical sector boundary". The table is lost after reboot, though. I tried to partition /dev/sda individually, and it worked; the table was written in about a second. The "not on a physical sector boundary" message is there, too.
    EDIT: I tried fdisk on /dev/sda, and then there were no messages about sector boundaries. After a reboot, I am able to use mkfs on /dev/md126p1, etc. fdisk shows that /dev/md126 has the same partitions as /dev/sda (but /dev/sdb doesn't have any). But at some point ("writing superblock and filesystem accounting information") mkfs is also blocked. Using it on sda1 results in a "partition is used by the system" error. What can be the problem?
    EDIT 2: I booted a freshly updated system from a pendrive and was able to create a partition table and filesystems on /dev/md126 without any apparent problems. Was it an issue with hardware support? My MB is an Asus P9X79.
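    For reference, a minimal sketch of what an aligned partitioning attempt on the array might look like (assuming the device really is /dev/md126 as above; this only illustrates checking alignment, it does not explain the hang):

      # Geometry the md device reports; this is what drives the
      # "does not start on a physical sector boundary" warning
      cat /sys/block/md126/queue/physical_block_size
      cat /sys/block/md126/queue/optimal_io_size

      # Fresh, optimally aligned label with one full-size partition
      parted -s -a optimal /dev/md126 mklabel gpt
      parted -s -a optimal /dev/md126 mkpart primary 0% 100%
      parted -s /dev/md126 print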

    Read the article

  • Moving a Windows 7 Pro OEM image to computers with the same hardware

    - by SWin
    We bought 8 clean computers (even without HDDs) with the same hardware, and bought eight Windows 7 Pro OEM disks. Now I am preparing one Win7 installation without activation but with all required programs, settings, etc. Then I'm going to clone the image to the other computers, even without sysprepping. I'm going to change the product key on each computer to the legal number on its COA sticker and activate over the Internet. Will this scenario work? I know that the OEM license agreement forbids image cloning and that what I'm going to do breaks the agreement. According to the license agreement I should do a manual clean install of Win7 on each computer. But how can Microsoft or other reviewers determine that cloning took place? All the computers are the same and the licensed Win7 DVDs are also the same. However, in my case the installation time will also be the same (and maybe some kind of installation code or something else), and this is not good. Will the Win7 activation work? Can I be sure that the activation will not break after some time? Can Microsoft detect the cloning during the activation process? Thank you.

    Read the article

  • Need advice for choosing software/hardware for virtualization.

    - by Anatoly
    Currently we have these servers:
    - Windows SBS 2003 Premium on an IBM X266, double Xeon F43, 2 GB RAM: DC, Exchange (70 users), MSSQL.
    - Windows 2003 R2 32-bit on an IBM x3400 with double Xeon E5310 and 4 GB RAM: terminal server (40+ users), ERP application based on the uniPaaS platform from Magicsoftware, and Pervasive SQL.
    - Ubuntu 8.04 (simple PC box) with a Squid proxy, a GLPI system and a phpBB3 forum for internal use.
    Recently the number of concurrent users on the terminal server passed 40 in rush hours and it gets stuck frequently, so we need an upgrade. I am thinking about moving all physical servers to virtual servers based on a cluster of 2 physical servers, to reduce downtime. I think we will grow to 50-60 concurrent terminal users in rush hours. I also plan to virtualize 10-15 Win XP/7 workstations (office, ERP, etc.), and there is a small chance we will also need Asterisk/HylaFAX for 100 users (if that is possible on the same VM). We also need 2-3 TB of NAS storage. What hardware upgrade/purchase do we need to complete this task? Which VM solution is preferable, VMware or Hyper-V? What backup software should we choose - Acronis or something else? Thank you in advance.

    Read the article

  • "could not find suitable fingerprints matched to available hardware" error

    - by Alex
    I have a ThinkPad T61 with a UPEK fingerprint reader. I'm running Ubuntu 9.10 with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session. I receive an error below the GNOME login that says "Could not locate any suitable fingerprints matched to available hardware." What is causing this? Here are the contents of my /etc/pam.d/common-auth file:

      #
      # /etc/pam.d/common-auth - authentication settings common to all services
      #
      # This file is included from other service-specific PAM config files,
      # and should contain a list of the authentication modules that define
      # the central authentication scheme for use on the system
      # (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
      # traditional Unix authentication mechanisms.
      #
      # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
      # To take advantage of this, it is recommended that you configure any
      # local modules either before or after the default block, and use
      # pam-auth-update to manage selection of other modules. See
      # pam-auth-update(8) for details.

      # here are the per-package modules (the "Primary" block)
      auth    sufficient                      pam_fprint.so
      auth    [success=1 default=ignore]      pam_unix.so nullok_secure
      # here's the fallback if no module succeeds
      auth    requisite                       pam_deny.so
      # prime the stack with a positive return value if there isn't one already;
      # this avoids us returning an error just because nothing sets a success code
      # since the modules above will each just jump around
      auth    required                        pam_permit.so
      # and here are more per-package modules (the "Additional" block)
      auth    optional                        pam_ecryptfs.so unwrap
      # end of pam-auth-update config
      #auth sufficient pam_fprint.so
      #auth required pam_unix.so nullok_secure

    Read the article

  • Directory service unavailable, new hardware, same settings

    - by Alex
    I'm working on a project with 2 sites connected by a VPN. Site 1 has the main server, and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly, but I can't for the life of me get the replacement server at site 2 up and running. I'm trying to replace like for like, just with upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) and used exactly the same settings as the old server. I have set up Active Directory, DNS Server, DHCP Server and WINS Server, all with the same settings as the old server (except the IP address and name). I can access Active Directory but I can't do anything; add, edit and delete all return "the directory service is unavailable". No one can log in on any of the computers at site 2 and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP but not to the net). I really don't have much experience, but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions on where I can start to troubleshoot this? It's driving me crazy and I only have a day before all the users are back on site.

    Read the article

  • understand SimpleTimeZone and DST Test

    - by Cygnusx1
    I have an issue with the use of the SimpleTimeZone class in Java. First, the JavaDoc is nice but not quite easy to understand with regard to the start and end rules. But with the help of some examples found on the web, I managed to get it right (I still don't understand why 8 represents the second week of a month in day_of_month, but whatever). Now I have written a simple JUnit test to validate what I understand:

      package test;

      import static org.junit.Assert.assertEquals;
      import java.sql.Timestamp;
      import java.util.Calendar;
      import java.util.GregorianCalendar;
      import java.util.SimpleTimeZone;
      import org.apache.log4j.Logger;
      import org.junit.Test;

      public class SimpleTimeZoneTest {
          Logger log = Logger.getLogger(SimpleTimeZoneTest.class);

          @Test
          public void testTimeZoneWithDST() throws Exception {
              Calendar testDateEndOut = new GregorianCalendar(2012, Calendar.NOVEMBER, 4, 01, 59, 59);
              Calendar testDateEndIn = new GregorianCalendar(2012, Calendar.NOVEMBER, 4, 02, 00, 00);
              Calendar testDateStartOut = new GregorianCalendar(2012, Calendar.MARCH, 11, 01, 59, 59);
              Calendar testDateStartIn = new GregorianCalendar(2012, Calendar.MARCH, 11, 02, 00, 00);

              SimpleTimeZone est = new SimpleTimeZone(-5 * 60 * 60 * 1000, "EST");
              est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000);
              est.setEndRule(Calendar.NOVEMBER, 1, Calendar.SUNDAY, 2 * 60 * 60 * 1000);

              Calendar theCal = new GregorianCalendar(est);

              theCal.setTimeInMillis(testDateEndOut.getTimeInMillis());
              log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
              log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
              log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
              log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
              log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
              assertEquals("End date Should be In DST", true, theCal.getTimeZone().inDaylightTime(theCal.getTime()));

              theCal.setTimeInMillis(testDateEndIn.getTimeInMillis());
              log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
              log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
              log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
              log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
              log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
              assertEquals("End date Should be Out DST", false, theCal.getTimeZone().inDaylightTime(theCal.getTime()));

              theCal.setTimeInMillis(testDateStartIn.getTimeInMillis());
              log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
              log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
              log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
              log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
              log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
              assertEquals("Start date Should be in DST", true, theCal.getTimeZone().inDaylightTime(theCal.getTime()));

              theCal.setTimeInMillis(testDateStartOut.getTimeInMillis());
              log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
              log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
              log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
              log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
              log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
              assertEquals("Start date Should be Out DST", false, theCal.getTimeZone().inDaylightTime(theCal.getTime()));
          }
      }

    OK, I want to test the date limits to see if inDaylightTime returns the right thing. My rules are: DST starts the second Sunday of March at 2am, and DST ends the first Sunday of November at 2am. In 2012 (now) this gives us March 11 at 2am and November 4 at 2am. You can see my test dates are set accordingly. Well, here is the output of my test run:

      2012-11-01 18:22:44,344 INFO [test.SimpleTimeZoneTest] - < Cal date = 2012-11-04 01:59:59.0 : Eastern Standard Time>
      2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - < Cal use DST = true>
      2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - < Cal In DST = false>
      2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - <offset = -18000000>
      2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - <DTS offset= 3600000>

    My first assert just fails and tells me that 2012-11-04 01:59:59 is not in DST. If I put 2012-11-04 00:59:59, the test passes. This 1-hour gap just puzzles me... can anyone explain this behavior? Oh, by the way, if anyone could elaborate on est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000): why does 8 mean the second week of March, and what does the -SUNDAY do? I can't figure this out on a real calendar example. Thanks

    Read the article

  • Using DNFS for test purposes

    - by rene.kundersma
    Because of other priorities, such as bringing the first v2 Database Machine in the Netherlands into production, I have spent less time on my blog than planned. I would however like to tell some things about DNFS, the built-in NFS client we have had in the Oracle RDBMS since 11.1. What DNFS is and how to set it up can all be found here. As you see, this documentation is actually the "Clusterware Installation Guide". I think that is weird; I would expect this to be part of the Admin Guide, especially the "Tablespace" chapter. I do however want to show what I did not find in the documentation that quickly (and solved after talking to my famous colleague "the prutser"). First, a quick setup:

    1. The standard ODM library needs to be replaced with the NFS ODM library:

      [oracle@ocm01 ~]$ cp $ORACLE_HOME/lib/libodm11.so $ORACLE_HOME/lib/libodm11.so_stub
      [oracle@ocm01 ~]$ ln -s $ORACLE_HOME/lib/libnfsodm11.so $ORACLE_HOME/lib/libodm11.so

    After changing to this library you will notice the following in your alert.log:

      Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 2.0

    2. The intention is to mount the datafiles over normal NAS (like NetApp). But, in case you want to test yourself and use an exported NFS filesystem, it should look like the following:

      [oracle@ocm01 ~]$ cat /etc/exports
      /u01/scratch/nfs *(rw,sync,insecure)

    Please note the "insecure" option in the export, since you will not be able to use DNFS without it if you export a filesystem from a host. Without the "insecure" option the NFS server considers the port used by the database "insecure" and the database is unable to acquire the mount:

      Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

    3. Before configuring the new Oracle stanza for NFS we still need to configure a regular kernel NFS mount:

      [root@ocm01 ~]# cat /etc/fstab | grep nfs
      ocm01.nl.oracle.com:/u01/scratch/nfs /incoming nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

    4. Then a so-called Oracle 'nfstab' needs to be created that specifies which exports are available for use:

      [oracle@ocm01 ~]$ cat /etc/oranfstab
      server:ocm01.nl.oracle.com
      path:192.168.1.40
      export:/u01/scratch/nfs mount:/incoming

    5. Creating a tablespace with a datafile on the NFS location:

      SQL> create tablespace rk datafile '/incoming/rk.dbf' size 10M;
      Tablespace created.

    Be aware that it may happen that you do not specify the insecure option (like I did). In that case you will still see output from the query v$dnfs_servers:

      SQL> select * from v$dnfs_servers;

      ID SVRNAME              DIRNAME           MNTPORT   NFSPORT    WTMAX  RTMAX
      -- -------------------- ----------------- --------- ---------- ------ ------
       1 ocm01.nl.oracle.com  /u01/scratch/nfs  684       2049       32768  32768

    But querying v$dnfs_files and v$dnfs_channels will not return any result, and indeed, you will see the following message in the alert log when you create a file:

      Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

    After correcting the export:

      SQL> select * from v$dnfs_files;

      FILENAME          FILESIZE PNUM   SVR_ID
      ----------------- -------- ------ ------
      /incoming/rk.dbf  10493952 20     1

    Rene Kundersma
    Oracle Technology Services, The Netherlands

    Read the article

  • Looking for home networking hardware and software advice

    - by phobos7
    Note: I originally wrote this up in a blog post. I've removed any affiliate links that I put in my original post to ensure I don't annoy anybody. I've recently moved home and now need to go to the trouble of sorting out my home network yet again. We had Virgin broadband in Hertford, but you can't get Virgin in the street we've moved to, so I've had to go with O2 Broadband. Normally I prefer to use my own hardware, and previously used the DLink DIR-655 router, which was great, but in this situation I am using the O2 Wireless Box III, since I only have an old Netgear DG834PN Wireless G modem router and I'd rather be using Wireless N. Anyway, the place we have moved into has only one phone point (in the hallway), has the best TV point in one room, and the best place to put the TV and other entertainment gear in yet another room. So, networking the house up for Internet and TV is required. The diagram below shows the things that I'll have in my home network, but there are three points where I'm not quite sure what hardware to use:
    1. A wireless access point/bridge that acts only as a wireless-to-wire bridge and not an AP, linking a Media Centre/PC and a couple of consoles to the network. I'm pretty much settled on using an Acer Aspire Revo R3600 as my media PC, probably with Ubuntu or Windows and XBMC installed.
    2. A wireless access point/bridge that acts only as a wireless-to-wire bridge and not an AP, linking up a device that can decode and stream TV from a TV aerial across the network.
    3. The device that is connected to 2). At the moment I'm considering an HDHomeRun by SiliconDust.
    For the bridges I'm currently considering either the TP LINK TL-WA701ND 150Mbps Wireless Lite N Access Point (very cheap at Amazon) or the Netgear 5 GHz Wireless-N HD Access Point/Bridge. I'd love to get some insight into what you would do in my situation. What wireless access point/bridge should I put at points 1) and 2)? What device should I choose for point 3) that can decode and stream a TV signal? Is the Acer Aspire Revo R3600 a good choice? Note 2: I've also posted this question on AVForums.

    Read the article

  • Is it possible to force the WCF test client to accept a self-signed certificate?

    - by Lawrence Johnston
    I have a WCF web service running in IIS 7 using a self-signed certificate (it's a proof of concept to make sure this is the route I want to go). It's required to use SSL. Is it possible to use the WCF Test Client to debug this service without needing a non-self-signed certificate? When I try, I get this error:

      Error: Cannot obtain Metadata from https:///Service1.svc
      If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.
      WS-Metadata Exchange Error
      URI: https:///Service1.svc
      Metadata contains a reference that cannot be resolved: 'https:///Service1.svc'. Could not establish trust relationship for the SSL/TLS secure channel with authority ''. The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The remote certificate is invalid according to the validation procedure.
      HTTP GET Error
      URI: https:///Service1.svc
      There was an error downloading 'https:///Service1.svc'. The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The remote certificate is invalid according to the validation procedure.

    Read the article

  • I need to duplicate software packages installed on one HPUX system to a test system

    - by oneodd1
    I am very inexperienced with HP-UX and need to duplicate a production server for a test environment. I have 11.11 on the prod server and have completed the base install on the test server. What I need is a way to add the packages installed on the prod server to the test server. Unfortunately I haven't the foggiest idea how to do this. I've thought of using an Ignite backup and restore, but the tape types on the two machines don't match. The other thought was using swlist to gather the installed packages and then going to the website to download and install them in the test environment. Has anyone successfully done something like this? Pointers?
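    For what it's worth, a rough sketch of the swlist route, assuming the SD-UX tools are usable on both boxes and that the software can be served from a depot the test server can reach (the hostname and product names below are placeholders):

      # On the production server: record what is installed, at product level
      swlist -l product > /tmp/prod-products.txt

      # On the test server: install the same selections from a depot
      # (a DVD or network depot holding the matching software)
      swinstall -s depot-server:/var/spool/sw ProductA ProductB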

    Read the article

  • test of ICMP block

    - by Marcos
    In my bash scripts I have been using something like:

      until fping -u google.com; do
          echo "$0[$$] Network/DNS down?? $(date)" 1>&2 && sleep $(($RANDOM%(1 + ++trynum * 1) +1)).222
      done

    to test for online connectivity. It halts in place, sleeping for growing random intervals, until it can ping google.com again. Problem: at some sites ICMP pings are blocked altogether while web pages are still reachable. What's a short way to test for this general case? Based on that test I will switch over to an HTTP-based test, like the exit status of curl -s google.com >/dev/null, if that is a good one.
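    As a hedged sketch of what the HTTP-based variant hinted at above might look like (keeping the same back-off idea, and assuming curl is available):

      # -s silences output, -f turns HTTP errors into a non-zero exit status,
      # --head avoids downloading the body, --max-time bounds each attempt
      trynum=0
      until curl -sf --head --max-time 5 http://www.google.com >/dev/null; do
          echo "$0[$$] Network down?? $(date)" 1>&2
          sleep $(( RANDOM % (1 + ++trynum) + 1 ))
      done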

    Read the article

  • Bad linking in Qt unit test -- missing the link to the moc file?

    - by dwj
    I'm trying to unit test a class that inherits QObject; the class itself is located up one level in my directory structure. When I build the unit test I get the standard unresolved errors seen when a class's MOC file cannot be found:

      test.obj : error LNK2001: unresolved external symbol "public: virtual void * __thiscall UnitToTest::qt_metacast(char const *)" (?qt_metacast@UnitToTest@@UAEPAXPBD@Z)

    plus 2 more missing functions. The MOC file is created but appears not to be linked. I've been poking around SO, the web, and Qt's docs for quite a while and have hit a wall. How do I get the unit test to include the MOC file in the link?

    My project file is dead simple:

      TEMPLATE = app
      TARGET = test
      DESTDIR = .
      CONFIG += qtestlib
      INCLUDEPATH += . ..
      DEPENDPATH += .
      HEADERS += test.h
      SOURCES += test.cpp ../UnitToTest.cpp stubs.cpp
      DEFINES += UNIT_TEST

    My directory structure and files (Makefiles removed for clarity):

      C:.
      |   UnitToTest.cpp
      |   UnitToTest.h
      |
      \---test
          |   test.cpp
          |   test.h
          |   test.pro
          |   stubs.cpp
          |
          +---debug
                  UnitToTest.obj
                  test.obj
                  test.pdb
                  moc_test.cpp
                  moc_test.obj
                  stubs.obj

    Edit: additional information. The generated Makefile.Debug shows the moc file missing:

      SOURCES = test.cpp \
                ..\test.cpp \
                stubs.cpp debug\moc_test.cpp
      OBJECTS = debug\test.obj \
                debug\UnitToTest.obj \
                debug\stubs.obj \
                debug\moc_test.obj

    Read the article

  • How have you saved green by going green?

    - by Bob
    For the purpose of this question, I am interested in server/datacenter-related hardware. Have you had any measurable ROI from swapping existing hardware for more "green" or energy-efficient hardware? For example, VMware says you can reduce energy consumption by up to 80% by using virtualization. I have also heard of a cooling solution from HP which is supposed to reduce energy usage by a small amount (<25% I think). Google has also done something by integrating a UPS into their power supplies to reduce energy consumption. Any real-world experiences would be great, but if you have any details on initial cost, savings and payoff time for the changes that were made, that would be fantastic. I am not only interested in virtualization; I am interested in anything.

    Read the article

  • Windows Setup could not configure Windows to run on this computer's hardware

    - by Hello71
    The whole installation goes smoothly up to the point of "Completing installation ...". The monitor changes resolution, after which a standard dialog box pops up saying Windows Setup could not configure Windows to run on this computer's hardware Then, in a few seconds, the whole machine powers down. Trying to restart produces the message: STOP: c000021a {Fatal System Error} 0x00000000 (0xc0000001 0x00100448) OR it boots into Setup and comes up with the message: Windows Setup encountered an unexpected error... (This is not the actual error, just paraphrasing) I tried using the OEM restore instead of a regular install, but it fails with the same error. (Even though it worked before...) General specs: HP Pavilion Elite e9262f Intel Core i5-750 Processor ATI Radeon HD 4650 Hitachi HDT721010SLA360 ATA Device 6GB DDR3 RAM SuperMulti DVD Burner with LightScribe Some built-in Wi-Fi module http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01916917 I've tried disconnecting the wireless card and disabling the built-in Ethernet and Firewire via the BIOS, and replacing the wireless keyboard and mouse with wired USB ones. Didn't work. I've also tried changing the SATA controller settings in the BIOS to RAID, AHCI, and IDE, reinstalling each time I changed. Still not working. I think the reason why it is showing the Fatal System Error is because it didn't finish installing before it errored out and shut down, so the system is left in an inconsistent state. I've tried 3 different copies (including the OEM restore) of Windows 7 now, and they're all failing at the same point, with the same error message. I've tried to install Windows 7 maybe 10 times already, with the exact same error message at the exact same location. Hm... Interestingly, the 32-bit version of Windows 7 works, but the 64-bit version doesn't. Perhaps it was a badly burned disk? Reburning the 64-bit version still comes up with the same error. Here's a picture of the side of the case that clearly says it came with Windows 7 64-bit, along with the model number and CPU. sudo fdisk -l: Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0009896f Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 7 HPFS/NTFS /dev/sda2 14 94119 755906445 7 HPFS/NTFS /dev/sda3 119922 121602 13492224 7 HPFS/NTFS /dev/sda4 94120 119922 207257740+ 5 Extended /dev/sda5 119527 119922 3170769 82 Linux swap / Solaris /dev/sda6 107174 119526 99225441 83 Linux /dev/sda7 94120 107173 104856192 7 HPFS/NTFS Partition table entries are not in disk order

    Read the article

  • Configure spanning tree between HP and Cisco hardware

    - by Tim Brigham
    I have three switches I'd like to configure in a loop - a Cisco stack (3750s) and two HP 2900 series. Each is connected to the next with a 10-gig backplane of one form or another. How do I configure the spanning tree on these systems to make this function correctly? From the documents I've looked at it looks like I need to set both sets of hardware to use MST mode, but I'm not sure past that point. The trunking, etc., is all set up as needed.
    - HP Switch 1 A4 connected to Cisco 1/0/1.
    - HP Switch 2 B2 connected to Cisco 2/0/1.
    - HP Switch 1 A2 connected to HP Switch 2 A1.

    HP Switch 1 show spanning-tree:

      Multiple Spanning Tree (MST) Information
      STP Enabled   : Yes
      Force Version : MSTP-operation
      IST Mapped VLANs : 1-4094
      Switch MAC Address : 0021f7-126580
      Switch Priority    : 32768
      Max Age  : 20
      Max Hops : 20
      Forward Delay : 15
      Topology Change Count  : 352,485
      Time Since Last Change : 2 secs
      CST Root MAC Address : 0018ba-c74268
      CST Root Priority    : 1
      CST Root Path Cost   : 200000
      CST Root Port        : 1
      IST Regional Root MAC Address : 0021f7-126580
      IST Regional Root Priority    : 32768
      IST Regional Root Path Cost   : 0
      IST Remaining Hops            : 20
      Root Guard Ports     :
      TCN Guard Ports      :
      BPDU Protected Ports :
      BPDU Filtered Ports  :
      PVST Protected Ports :
      PVST Filtered Ports  :

            |           Prio                   | Designated    Hello
      Port  Type      | Cost      rity State   | Bridge        Time PtP Edge
      ----- --------- + --------- ---- --------+ ------------- ---- --- ----
      A1              | Auto      128  Disabled   |
      A2    10GbE-CX4 | 2000      128  Forwarding | 0021f7-126580 2  Yes No
      A3    10GbE-CX4 | Auto      128  Disabled   |
      A4    10GbE-SR  | 2000      128  Forwarding | 0021f7-126580 2  Yes No

    HP Switch 2 show spanning-tree:

      Multiple Spanning Tree (MST) Information
      STP Enabled   : Yes
      Force Version : MSTP-operation
      IST Mapped VLANs : 1-4094
      Switch MAC Address : 0024a8-cd6000
      Switch Priority    : 32768
      Max Age  : 20
      Max Hops : 20
      Forward Delay : 15
      Topology Change Count  : 19,623
      Time Since Last Change : 32 secs
      CST Root MAC Address : 0018ba-c74268
      CST Root Priority    : 1
      CST Root Path Cost   : 202000
      CST Root Port        : A1
      IST Regional Root MAC Address : 0024a8-cd6000
      IST Regional Root Priority    : 32768
      IST Regional Root Path Cost   : 0
      IST Remaining Hops            : 20
      Root Guard Ports     :
      TCN Guard Ports      :
      BPDU Protected Ports :
      BPDU Filtered Ports  :
      PVST Protected Ports :
      PVST Filtered Ports  :

            |           Prio                   | Designated    Hello
      Port  Type      | Cost      rity State   | Bridge        Time PtP Edge
      ----- --------- + --------- ---- --------+ ------------- ---- --- ----
      A1    10GbE-CX4 | 2000      128  Forwarding | 0021f7-126580 2  Yes No
      A2    10GbE-CX4 | Auto      128  Disabled   |
      B1    SFP+SR    | 2000      128  Blocking   | a44c11-a67c80 2  Yes No
      B2              | Auto      128  Disabled   |

    Cisco Stack 1 show spanning-tree (additional VLANs omitted):

      VLAN0100
        Spanning tree enabled protocol ieee
        Root ID    Priority    1
                   Address     0018.bac7.426e
                   Cost        2
                   Port        107 (TenGigabitEthernet2/1/1)
                   Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

        Bridge ID  Priority    32868  (priority 32768 sys-id-ext 100)
                   Address     a44c.11a6.7c80
                   Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
                   Aging Time  300 sec

      Interface           Role Sts Cost      Prio.Nbr Type
      ------------------- ---- --- --------- -------- --------------------------------
      Te1/1/1             Desg FWD 2         128.53   P2p
      Te2/1/1             Root FWD 2         128.107  P2p

    Read the article

  • Multiple test Active Directory environments hand in hand with production domain controllers

    - by MadBoy
    What's the best approach to having multiple test environments alongside the production one? We have multiple programming teams that build solutions that use Active Directory very often. We have tried different approaches, starting with giving teams their own domain controllers (in the same subnet), or additional OUs in our production AD that a team gets control over and within which it can create/delete accounts. We have thought of 4 possible solutions:
    1. Setting up separate OUs in our production environment.
    2. Creating subdomains of our contoso.com domain, like test.contoso.com and something.contoso.com, and delegating control to the teams (would we need additional DCs for this, or would the two we already have be enough?).
    3. Setting up an additional test domain controller that has a trust to our main domain, which all teams can use as they please.
    4. Setting up a single domain controller for every team/project.
    We're taking into consideration the amount of resources needed, security (for example, having multiple domain controllers with multiple passwords may lead users to use simpler passwords) and overall best practices for this scenario.

    Read the article

  • Open-source navigation software and 3rd party hardware

    - by anttir
    I'm a bit fed up with my current navigator (TomTom) as it turned into adware after six months of use: "Please buy new maps at www.tomtom.com; click this button to see what you wanted to do." Is there any (good) OSS navigation software with support for proprietary hardware? I'm perfectly happy to purchase separate maps and hardware for the software as long as I don't have to give my money to TomTom or Navigon.

    Read the article

  • How to decide what hardware to deploy a web application on

    - by Yuval A
    Suppose you have a web application, no specific stack (Java/.NET/LAMP/Django/Rails, all good). How would you decide which hardware to deploy it on? What rules of thumb exist for determining how many machines you need? How would you turn parameters such as concurrent users, simultaneous connections and DB read/write ratio into a decision on how much, and which, hardware you need? Any resources on this issue would be very helpful...
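    One hedged, back-of-the-envelope approach (not from the question itself): benchmark a single representative machine and divide the expected peak load by its measured capacity, leaving headroom. A rough sketch with ApacheBench, assuming it is installed and that http://test-host/ stands in for a representative page of the application:

      # Measure requests/sec and latency at increasing concurrency levels
      for c in 10 50 100 200; do
          ab -n 5000 -c "$c" http://test-host/ | grep -E "Requests per second|Time per request"
      done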

    Read the article

  • Why doesn't my test run tearDownAfterTestClass when it fails

    - by Memor-X
    In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests, so naturally when I finish the test I should get rid of this test customer in tearDownAfterTestClass, so that I can create them anew when I rerun the test and not get any false positives. When the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails because I check for MySQL errors in it like this:

      try {
          if (!mysqli_query($connection, $query)) {
              $this->assertTrue(false);
          }
      } catch (Exception $exc) {
          $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL;
          $msg .= 'Could not run query - ' . mysqli_error($connection) . PHP_EOL;
          $this->fail($msg);
      }

    The error I get is a primary key violation, which is expected because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). However, that means that when the test failed it didn't run tearDownAfterTestClass. Now I could just move everything from tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming since it defeats the purpose of even having tearDownAfterTestClass. So I am wondering: why isn't my tearDownAfterTestClass running when a test fails? NOTE: the database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore; the purpose of the tear-down is to remove any data we have added because of the test so that we don't have to restore the database every time we run the tests.

    Read the article

  • Database or website of kernel config files?

    - by Kami
    I've experienced a kernel panic after trying to compile a Gentoo kernel for a Sun UltraSPARC T5120 server. The kernel panic came from missing support for the SAS disk controller in the menuconfig. I've wasted so much time because I had no clue about the hardware I was using. I know that the kernel config depends on what you plan to do with your machine, but I want to have a configuration file that at least matches my hardware! Is there a website or database that provides menuconfig kernel configuration files for known or branded hardware, like Dell servers or Apple computers?
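    Not an answer to the database question, but a rough sketch of one way to derive a config matching the hardware you are actually on, assuming the box can first be booted into some working kernel (e.g. a live/installer environment) and that the kernel tree is recent enough to have the localmodconfig target:

      # See which kernel drivers the running kernel bound to each device
      lspci -k

      # From the kernel source tree: trim the config down to the modules
      # currently loaded, then review it by hand
      cd /usr/src/linux
      make localmodconfig
      make menuconfig   # double-check that the disk controller driver is built in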

    Read the article

  • Test-Path returns different results in 64-bit and 32-bit PowerShell

    - by StarSpace
    I am developing a script which should run under both 64-bit and 32-bit PowerShell. Unfortunately it seems that Test-Path returns different results in the 64-bit and 32-bit environments. Both sessions are running under the same user, and this user has full access to the specific registry key.

      64-bit PowerShell
      > test-path HKLM:\SOFTWARE\Citrix\ProvisioningServices
      True

      32-bit PowerShell (x86)
      > test-path HKLM:\SOFTWARE\Citrix\ProvisioningServices
      False

    Any idea?

    Read the article
