Search Results


  • Firefox not displaying icons in KhanAcademy

    - by ADTC
    If you don't know what Khan Academy is, check it out. It's awesome. (For testing purposes you may view any video on the website.) My problem -- minor, but annoying -- is that in Firefox (Windows 7), the icons below the video are shown as boxes with hex codes in them. This means the icons come from some font that isn't getting downloaded by Firefox; they render fine in Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X). I checked the source and found that the font in question is called "FontAwesome". I found a question on S.O. that may explain why this happens -- the CSS does use single quotes to enclose the font's src location. However, I don't have write access to the Khan Academy servers, so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how; I can run Greasemonkey scripts if that would help. Also, would manually downloading the font and adding it to Windows' Fonts folder help? I tried this with the TTF font, and it does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is:

        @font-face {
            font-family: 'FontAwesome';
            src: url('./fontawesome-webfont.eot');
            src: url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'),
                 url('./fontawesome-webfont.woff') format('woff'),
                 url('./fontawesome-webfont.ttf') format('truetype'),
                 url('./fontawesome-webfont.svg#FontAwesome') format('svg');
            font-weight: normal;
            font-style: normal
        }
        [class^="icon-"]:before, [class*=" icon-"]:before {
            font-family: FontAwesome;
            font-weight: normal;
            font-style: normal;
            display: inline-block;
            text-decoration: inherit
        }

  • Can Windows logoff events be tracked?

    - by Massimo
    I'm working on an application to track network user logon/logoff events in an Active Directory domain; the application will work by auditing security logs on domain controllers. Auditing logon events can get somewhat tricky, but it can successfully be done. My problem: how can I track logoff events? Based on some research I've done, it looks like these events are only logged locally on workstations, not on DCs; also, the "lastLogoff" attribute exists on AD user objects, but it's not actually used by anyone. This is a very specific question: is anything logged on DCs when a user logs off from a domain workstation? To clarify: I'm not interested in other auditing methods, I can't deploy logon/logoff scripts and I can't install anything anywhere; I also know opened and closed network sessions are logged, but this is not what I'm looking for. I need to audit interactive logons and logoffs to domain workstations, and I can do this only by reading domain controllers' security logs; reading each workstation's local event log is out of the question. If this can't be done, that's ok; but I need a clear answer on that. Can this be done? If yes, how?
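
    A quick way to test the premise is to ask a domain controller directly whether any logoff events (ID 4634 in the Windows 2008+ numbering) ever land in its Security log. A minimal sketch, assuming Python on a domain member with remote event-log rights; the DC name is a placeholder:

        import subprocess

        # Query a domain controller's Security log for logoff events (ID 4634).
        # An empty result from a long-running log supports the theory that
        # logoffs are only recorded on workstations. "DC01" is a placeholder.
        query = "*[System[(EventID=4634)]]"
        result = subprocess.run(
            ["wevtutil", "qe", "Security", "/q:" + query,
             "/c:5", "/f:text", "/r:DC01"],
            capture_output=True, text=True)
        print(result.stdout or "no matching events")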

  • Private staff network within public network

    - by pianohacker
    I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a secure and simple way. It's a little tricky: the staff and patron networks need to be kept separate, and even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate; even if I set up enough VMs that they didn't share any servers, they need to use the same two printers at the very least. Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP, and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network, and has the same Windows server connected on a different port (providing DNS and DHCP) and another, faster DSL modem (separate connections are very useful, especially as we depend heavily on some cloud-hosted software). tl;dr: we have a public network, and a NATed staff network within it. My question is: is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget. (My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network design is broken rather than asking about the DHCP problem and being told my network was broken.)

  • Picking a Linux-compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself) I got a motherboard that had really poor Linux support for a long time - specifically the audio: I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around. I have some specific questions about "server motherboards", actually. I looked at a few models of server motherboards by Intel, and some random models on Newegg. I wasn't able to see much of a difference from regular desktop motherboards, other than most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well - same question: what's the difference? To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are: Can I get a motherboard without onboard RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card; I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server? Where can I find out about Linux support for the components on this board ("Intel ICH10R", "Realtek ALC889", "Marvell 88E8056")? I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at some point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this? The ultimate solution for me would be a motherboard that had GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.

  • Looking for a host based network monitor solution

    - by Ole Martin Handeland
    Hi all!

    Problem

    My hosting company has a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was.

    Question

    So my question is: does anyone know of a host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he/she used?

    What I don't want

    I'm just guessing that someone will suggest Nagios, Munin, Zabbix, Cacti or MRTG - I've also looked at those, but a graph of network usage will not give me the answers I'm looking for. :-)

    Almost there

    I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers: although it lists a lot of statistics and I could list the clients, it doesn't show the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the historical part. I could run apt-get upgrade and download a whole bunch of software, but not see it in the log afterwards.
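
    For what it's worth, the per-client accounting itself is small enough to script; a minimal sketch, assuming Python with the scapy package and root privileges (it samples live traffic only - persisting the counters to build history is left out):

        from collections import Counter
        from scapy.all import IP, TCP, UDP, sniff

        bytes_per_client = Counter()

        def account(pkt):
            # Attribute each packet's size to its source address and
            # destination port - crude, but enough to spot a noisy client.
            if IP in pkt:
                if TCP in pkt:
                    port = pkt[TCP].dport
                elif UDP in pkt:
                    port = pkt[UDP].dport
                else:
                    port = 0
                bytes_per_client[(pkt[IP].src, port)] += len(pkt)

        sniff(prn=account, store=False, timeout=60)  # sample one minute
        for (ip, port), n in bytes_per_client.most_common(10):
            print(ip, port, n)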

  • How to re-arrange Excel database from 1 long row, into 3 short rows of unequal lengths and automatically repeat the process?

    - by user326884
    This question is an extension/continuation of my previous question at How to re-arrange Excel database from 1 long row, into 3 short rows and automatically repeat the process?, which was answered by Jason Lewis, for which I'm grateful. But being a dummy with Excel's INDIRECT function, I need assistance again. For example, in Sheet A, Row 1 has the following data in each cell (altogether 72 cells occupied):

        A1 B1 C1 D1 E1 F1 G1 H1 I1 J1 K1 L1 M1 N1 O1 P1 Q1 R1 S1 T1 U1 V1 W1 X1 Y1 Z1 AA1 AB1 AC1 AD1 AE1 AF1 AG1 AH1 AI1 AJ1 AK1 AL1 AM1 AN1 AO1 AP1 AQ1 AR1 AS1 AT1 AU1 AV1 AW1 AX1 AY1 AZ1 BA1 BB1 BC1 BD1 BE1 BF1 BG1 BH1 BI1 BJ1 BK1 BL1 BM1 BN1 BO1 BP1 BQ1 BR1 BS1 BT1

    To be re-arranged into Sheet B in the following format:

        Row 1 (35 cells): A1 B1 C1 D1 E1 F1 G1 H1 I1 J1 K1 L1 M1 N1 O1 P1 Q1 R1 S1 T1 U1 V1 W1 X1 Y1 Z1 AA1 AB1 AC1 AD1 AE1 AF1 AG1 AH1 AI1
        Row 2 (28 cells): AJ1 AK1 AL1 AM1 AN1 AO1 AP1 AQ1 AR1 AS1 AT1 AU1 AV1 AW1 AX1 AY1 AZ1 BA1 BB1 BC1 BD1 BE1 BF1 BG1 BH1 BI1 BJ1 BK1
        Row 3 (9 cells):  BL1 BM1 BN1 BO1 BP1 BQ1 BR1 BS1 BT1

    Sheet A (the database sheet) has a lot of rows (for example 3,000 rows, each with 72 cells occupied with data), hence Sheet B (the reformatted database) is estimated to have 9,000 rows (i.e. 3 x 3,000) of unequal lengths. Thanking you in anticipation of your speedy response.
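
    INDIRECT/INDEX formulas can do this, but for a one-off reshape a small script may be simpler. A minimal sketch, assuming Python with the openpyxl package; the file and sheet names are placeholders, and each 72-cell row is split into runs of 35, 28 and 9 cells to match the example above:

        from openpyxl import load_workbook

        # Split every 72-cell row of Sheet A into three rows on Sheet B,
        # matching the A1..AI1 / AJ1..BK1 / BL1..BT1 layout above.
        wb = load_workbook("database.xlsx")
        src, dst = wb["Sheet A"], wb.create_sheet("Sheet B")

        splits = (35, 28, 9)
        for row in src.iter_rows(max_col=72, values_only=True):
            start = 0
            for width in splits:
                dst.append(list(row[start:start + width]))
                start += width

        wb.save("database-reshaped.xlsx")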

  • MS Excel 2010 - Using DSN + 32 bit drivers

    - by Kristiaan
    I need some advice, as I'm running into a problem and so far I have been unable to find a solution. We have a set of reports developed in MS Excel that use DSN files to connect to data sources to retrieve data. These work fine on 32-bit and 64-bit systems; however, we are moving to a terminal server environment using Windows 2008 R2 64-bit. The reports fail to run using the DSNs within this environment if we only have the 32-bit drivers installed and configured in the ODBC settings; the minute we install the 64-bit drivers, the software works. Is there a way/method of getting Excel or the DSN file to NOT use the 64-bit driver, but force it to use the 32-bit driver? ANSWERED - but due to my low user score I cannot "answer" my own question... Sadly there is no way to do what I want to do without a lot of very nasty and not-100%-perfect registry hacks. If you need to access 32-bit ODBC data sources, the application in question has to be 32-bit. Here is a link to just one forum post I found relating to this type of problem; it appears the only way I would be able to accomplish this is to remove the 64-bit version of Office and install the 32-bit version instead: http://social.msdn.microsoft.com/Forums/en-US/accessdev/thread/5108f337-f06a-4518-afe3-d3c1abd040ef/

  • What Is The Proper Laptop Battery Care While Running Laptop Solely On Battery?

    - by Boris_yo
    For convenience, I had to move my laptop to another room, away from the room where I always ran it on a UPS without using the battery. Since I now always run the laptop on battery, I question the proper usage to prolong battery life. Currently I run the laptop on battery with the power supply connected, so the battery is constantly being charged until it reaches 100%; when it does, I disconnect the power supply and continue working until the battery meter shows 10% remaining. That's when I plug in the power supply and let it charge to 100% once again while I work. But it takes a lot of time to fully charge the laptop while working, since my power supply is 60W, which should be the reason for such a slow charge (I think the charger I use is an express charger). The fact that charging takes so long while I work makes me wonder whether it keeps the battery warm for the whole charging period, which brings me to my question: should I keep running the laptop as described above, or would it be better to leave the power supply constantly connected and keep the battery between 99% and 100%? On one hand, that won't keep the battery warm, but it will frequently top the battery up from 99% to 100% (which might reduce battery life?). On the other hand, if I keep working solely on battery and recharge when below 10%, the battery will get warm, but only while charging. Can anybody suggest the correct way of running a laptop on battery to ensure better battery life? Dell Latitude E6420, Windows 7 64-bit.

  • MySQL select query result set changes based on column order

    - by user197191
    I have a Drupal 7 site using the Views module to back the site's content search results. The same query with the same dataset returns different results on MySQL 5.5.28 and MySQL 5.6.14. The results from 5.5.28 are the correct, expected ones; the results from 5.6.14 are not. If, however, I simply move a column in the select statement, the query returns the correct results. Here is the code-generated query in question (modified for readability). I apologize for the length; I couldn't find a way to reproduce it without the whole query:

        SELECT DISTINCT
            node_node_revision.nid AS node_node_revision_nid,
            node_revision.title AS node_revision_title,
            node_field_revision_field_position_institution_ref.nid AS node_field_revision_field_position_institution_ref_nid,
            node_revision.vid AS vid,
            node_revision.nid AS node_revision_nid,
            node_node_revision.title AS node_node_revision_title,
            SUM(search_index.score * search_total.count) AS score,
            'node' AS field_data_field_system_inst_name_node_entity_type,
            'node' AS field_revision_field_position_college_division_node_entity_t,
            'node' AS field_revision_field_position_department_node_entity_type,
            'node' AS field_revision_field_search_lvl_degree_lvls_node_entity_type,
            'node' AS field_revision_field_position_app_deadline_node_entity_type,
            'node' AS field_revision_field_position_start_date_node_entity_type,
            'node' AS field_revision_body_node_entity_type
        FROM node_revision node_revision
        LEFT JOIN node node_node_revision
            ON node_revision.nid = node_node_revision.nid
        LEFT JOIN field_revision_field_position_institution_ref field_revision_field_position_institution_ref
            ON node_revision.vid = field_revision_field_position_institution_ref.revision_id
            AND (field_revision_field_position_institution_ref.entity_type = 'node'
                 AND field_revision_field_position_institution_ref.deleted = '0')
        LEFT JOIN node node_field_revision_field_position_institution_ref
            ON field_revision_field_position_institution_ref.field_position_institution_ref_target_id = node_field_revision_field_position_institution_ref.nid
        LEFT JOIN field_revision_field_position_cip_code field_revision_field_position_cip_code
            ON node_revision.vid = field_revision_field_position_cip_code.revision_id
            AND (field_revision_field_position_cip_code.entity_type = 'node'
                 AND field_revision_field_position_cip_code.deleted = '0')
        LEFT JOIN node node_field_revision_field_position_cip_code
            ON field_revision_field_position_cip_code.field_position_cip_code_target_id = node_field_revision_field_position_cip_code.nid
        LEFT JOIN node node_node_revision_1
            ON node_revision.nid = node_node_revision_1.nid
        LEFT JOIN field_revision_field_position_vacancy_status field_revision_field_position_vacancy_status
            ON node_revision.vid = field_revision_field_position_vacancy_status.revision_id
            AND (field_revision_field_position_vacancy_status.entity_type = 'node'
                 AND field_revision_field_position_vacancy_status.deleted = '0')
        LEFT JOIN search_index search_index
            ON node_revision.nid = search_index.sid
        LEFT JOIN search_total search_total
            ON search_index.word = search_total.word
        WHERE (((node_node_revision.status = '1')
            AND (node_node_revision.type IN ('position'))
            AND (field_revision_field_position_vacancy_status.field_position_vacancy_status_target_id IN ('38'))
            AND ((search_index.type = 'node') AND ((search_index.word = 'accountant')))
            AND ((node_revision.vid = node_node_revision.vid AND node_node_revision.status = 1))))
        GROUP BY search_index.sid, vid, score,
            field_data_field_system_inst_name_node_entity_type,
            field_revision_field_position_college_division_node_entity_t,
            field_revision_field_position_department_node_entity_type,
            field_revision_field_search_lvl_degree_lvls_node_entity_type,
            field_revision_field_position_app_deadline_node_entity_type,
            field_revision_field_position_start_date_node_entity_type,
            field_revision_body_node_entity_type
        HAVING (((COUNT(*) >= '1')))
        ORDER BY node_node_revision_title ASC
        LIMIT 20 OFFSET 0;

    Again, this query returns different sets of results on MySQL 5.5.28 (correct) and 5.6.14 (incorrect). If I move the column named "score" (the SUM() column) to the end of the column list, the query returns the correct set of results in both versions of MySQL. My question is: is this expected behavior (and why), or is this a bug? I'm on the verge of reverting my entire environment back to 5.5 because of this.

  • PHP Causing Segmentation Fault & Apache Blank Response

    - by Joe
    I recently updated a Debian Lenny server to PHP 5.3.5 using the dotdeb source. Soon after doing so, certain (but not all) sites on the server stopped responding to requests: a blank response would be returned - no headers, no content, nothing. I found a related question on Stack Overflow that seems to describe something similar, and used the same code the user had in their answer to see if I could replicate the issue:

        <?php
        class A {
            public function __construct() {
                new B;
            }
        }
        class B {
            public function __construct() {
                new A;
            }
        }
        new A;
        print 'Loaded Class A';
        ?>

    This triggered the problem - the page returned absolutely nothing, despite the original question stating this was fixed in PHP 5.5.0. No CPU spike as you'd expect, no wait, just an almost instant zero response. I then ran the same code from the CLI (php -f test.php) and the only output I got was 'Segmentation fault'. Tailing the kernel log I've spotted:

        Feb 16 07:04:06 creature kernel: [192203.269037] php[17710] general protection ip:76ef37 sp:7fff155e9bb0 error:0 in php5[400000+870000]
        Feb 16 08:57:31 creature kernel: [199639.699854] apache2[31136]: segfault at 7fff13a84fe0 ip 7f730514ea40 sp 7fff13a85008 error 6 in libphp5.so[7f7304ce8000+915000]

    All extremely odd, and I'm not sure what it's pointing to or what I should do to debug this further. As I said, some sites work, but code such as the above definitely triggers it. Not that the sites I want to serve have code like that - it's just an example. Any help is much appreciated!

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows is prompting about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you! Samba config:

        [global]
        workgroup = WORKGROUP
        server string = %h server
        include = /etc/samba/dhcp.conf
        dns proxy = no
        log level = 2
        syslog = 2
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog only = yes
        panic action = /usr/share/samba/panic-action %d
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = no
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        guest account = nobody
        load printers = no
        disable spoolss = yes
        printing = bsd
        printcap name = /dev/null
        unix extensions = yes
        wide links = no
        create mask = 0777
        directory mask = 0777
        use sendfile = no
        null passwords = no
        local master = yes
        time server = yes
        wins support = yes
        ea support = yes
        store dos attributes = yes

    Note: I found this related question, but it explains the loss due to the user trying to transfer from NTFS to FAT32.

  • Home Server: storage virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server: a sort of private cloud where I manage the storage space independently of the VM one. This question focuses on storage management (I have another question related to VM/compute instance management). Here are my environment and wishes:

        Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology), one 250 GB system disk, and up to 4 HDDs (2 TB) for "data"
        OS types: only Linux (perhaps a *BSD VM in the future); the distribution does not matter - I'm familiar with RHEL, Fedora, Suse and Ubuntu, but any other recommendation will be fine
        The 4 HDDs are going to be a software RAID array, probably RAID 5
        Storage should be "virtualised/cloudified" and easy to extend: if I add a NAS on the network, I can include its capacity within this storage space as one virtual disk (this could be a NAS, an external HDD or another server)
        Cluster FS, S3-style space or OpenStack block storage? Whatever is easiest to manage/maintain and easy to integrate/plug into a VM/compute instance

    I would prefer free (libre, as in free speech) and open source tools, but it does not have to be free as in free beer. Note: the VMs I intend to run on top of this server are one dedicated to backup, one for an "ownCloud/Dropbox"-like service, and perhaps one for a media server (hosting video and photos). I'm not sure whether traditional VMs or compute instances are the most suitable for this.

  • How do I calculate the cost of printing a given page?

    - by Alenanno
    I have seen questions like How much does a square inch of ink cost and How much more will a high-dpi image cost to print?, but mine is asking neither about a specific case nor about how much something costs, as that would depend on the toner, for example. Rather, I was wondering how I should go about calculating the cost of printing a given page. Note that "given page" should be seen as a sort of x, i.e. the answer should be applicable in any case; I'd like this question to provide a good reference for those who want to calculate this cost. What should be taken into consideration? The cost of a single page (the paper only) is easy to check, since you divide the cost of the whole package by the number of pages in the package itself. But how do I calculate the cost of the ink/toner? Which translates to: how do I calculate the ink density1 for a given printer? I know it depends on the quality of the printer itself, the type, the quality of the image being printed, the very nature of what I'm going to print, etc. But again, the focus of my question is not on the variables of this case, but rather the constants, hoping the math simile works for this case too. 1: Total amount of ink in one area of the page.
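
    One way to separate the constants from the variable (a sketch, not a standard; it leans on the manufacturer's page-yield rating, which is conventionally quoted at 5% page coverage):

        \text{cost}_{\text{page}} \approx
            \underbrace{\frac{\text{price}_{\text{pack}}}{N_{\text{sheets}}}}_{\text{paper}}
            + \underbrace{\frac{\text{price}_{\text{cartridge}}}{\text{yield}_{5\%}}
                \cdot \frac{\text{coverage}}{5\%}}_{\text{ink/toner}}

    Everything except coverage can be read off the packaging; coverage is the per-page variable, i.e. the ink density in the sense of the footnote above.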

  • Do the same lame settings (--alt-preset standard) have different names?

    - by erikric
    I've always used Windows, and therefore EAC, to rip my CDs, but since I've started using Ubuntu more often, I decided to try to rip some albums there. I ended up using k3b (since I found it in the Ubuntu Software Center; I tried to install RubyRipper first, but when 'sudo apt-get install' or UDC fails, a Windows user like me is lost). The real question here is about the settings for the lame encoder. I'm used to just writing --alt-preset standard, and everything works like a charm, but the default in k3b looks like this:

        lame -r --bitwidth 16 --little-endian -s 44.1 -h --tt %t --ta %a --tl %m --ty %y --tc %c --tn %n - %f

    I assume these are some sensible lame settings, and not a malicious Perl script (although it looks like one). It seems to me that some of these ought to be there, and that I cannot just overwrite the whole thing with my good ol' --alt-preset. So, the question is: do I need to replace anything, or is -h the same as the old --alt-preset? Is there a difference between '--preset standard' and '--alt-preset standard'? And are those the same as -V 2?

  • Symantec NetBackup restore - Incremental backup

    - by w0051977
    We are using NetBackup as a corporate solution. Incremental backups are taken daily during the week, and then a full backup is done at the weekend (Saturday). My colleague has restored a folder to how it stood at 14:00 on a Tuesday. The problem is that the restore is pulling in files from the weekend backup even if they did not exist at the point in time of the restore. For example, the folder we are restoring should look like this (this is how it looked on Tuesday at 14:00):

        Folder1 (folder name)
        Test.txt
        Test1.txt
        Test2.txt

    The folder looked like this at the weekend when the full backup was done (so Test3.txt did exist at the weekend when the full backup ran, but not at the restore point):

        Folder1 (folder name)
        Test.txt
        Test1.txt
        Test2.txt
        Test3.txt

    The actual folder restored looks like this:

        Folder1 (folder name)
        Test.txt
        Test1.txt
        Test2.txt
        Test3.txt

    Test3.txt should not be restored, because it did not exist at the point of the restore. Is there a setting somewhere that we are missing? The folder in question is 200 GB - the example above is for simplification. I realise this is a basic question.

  • Adding users to SharePoint when they are not in the same domain

    - by jim-work
    Bear with me as I explain this - I'm working my way through SharePoint access as I go, but I'll clarify my question as I go along.

    The Problem

    We have about 10,000 users who need access to our SharePoint 2005 based reporting. Because our organization is migrating from one domain to another, we need to add each user twice, once for each domain. For the current domain, this is no problem: we've got a PowerShell script that I tweaked to add all the users in a given CSV file, and it takes about 5 minutes to run. The big problem we're having is with users who are NOT in our currently active domain. Because the SharePoint server cannot authenticate the new users, we can't add them directly. What we're doing is creating a temp user, then using STSADM.EXE to migrate that temp user to the proper domain/user_name for each of our 10,000 users. The creation and migration take about 5 seconds per user, or well over 12 hours to run.

    The Question

    Has anyone encountered this before? Is there a way to add users without requiring AD authentication? Why is STSADM.EXE running so slow? Thanks a lot for any advice or direction anyone can give me.
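
    For reference, the migration loop in question is roughly the following; a minimal sketch, assuming Python, a two-column CSV of temp/new logins, and stsadm's migrateuser operation (the file name and logins are placeholders):

        import csv
        import subprocess

        # Migrate each temp account to its cross-domain identity.
        # Each CSV row is assumed to hold exactly two columns:
        # the temp login and the target domain login.
        with open("users.csv", newline="") as f:
            for old_login, new_login in csv.reader(f):
                subprocess.run(
                    ["stsadm.exe", "-o", "migrateuser",
                     "-oldlogin", old_login,
                     "-newlogin", new_login,
                     "-ignoresidhistory"],
                    check=True)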

  • Apache inflate application/ with mod_filter

    - by BGT
    I need to prevent PDF objects from being gzipped. Really, this only needs to take place if the request is from the Mozilla browser (but since I can't get something as seemingly simple as no-gzip for application/pdf working, I figure it's wiser to start there). From looking at the Apache documentation on mod_filter, I've got the following:

        <Location />
            FilterDeclare gzipDeflate CONTENT_SET
            FilterDeclare gzipInflate CONTENT_SET
            FilterProvider gzipDeflate deflate req=User-Agent $Mozilla/
            FilterProvider gzipInflate inflate resp=Content-Type $application/
            FilterChain +gzipDeflate +gzipInflate
        </Location>

    From my testing, the gzipDeflate filter is doing its job, and all the pages without a Content-Type starting with "application" are being gzipped. But the gzipInflate doesn't seem to be working at all. I've inspected the response in Firebug and verified that the Content-Type being sent down is application/pdf. I'll go ahead and ask a potentially stupid question though: the response's Content-Type header in its entirety reads "application/pdf; charset=Windows-1252". Does that make any sort of difference, or is $application/ presumably enough to catch that? Any help is greatly appreciated. One other point: the URL that returns the PDF object does not have the .pdf extension. The PDF itself is stored in an Oracle database as a blob and appended to the page when appropriate (all the URLs in the system use the same baseline). This was part of an original inquiry by a helpful member at Stack Overflow who pointed me towards mod_filter and suggested I post the question here.

  • AMD Phenom II X2 555 BE, core unlock suddenly not working?

    - by user328271
    I've had a Phenom II X2 555 BE for around 2-3 years. When I got it, I immediately core-unlocked it with my ECS A880GM-A3 mobo, which turns it into a Phenom II X4 B55. A few days ago, I installed Windows 7 64-bit to make full use of my 4 GB of RAM. Now, when I start my system with its cores unlocked, it restarts right after the BIOS screen; if I disable the core unlock, it boots to the OS just fine. My questions: does a 64-bit OS make a difference in core unlocking? Have my 3rd and 4th cores burnt out? Extra info: I tried core unlocking while keeping the 3rd and 4th cores disabled, and it still won't boot into the OS. Could it be a motherboard problem? Sorry for bad English; I will try to give additional information if needed. Thanks! It is worth mentioning that I'm no computer expert, but I tried to make my explanation as short as possible. I also asked my question on TomsHardware, but I've had no answer so far, so I decided to ask here too.

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind (correct me if I'm wrong), a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage-collection optimization; there would also be more time before TRIM became necessary. I see that Wikipedia remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also, many SSDs contain internal caches of some sort, and presumably those caches are larger on correspondingly large SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.? Are there any resources (webpages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant to what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.
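
    On the measurement side, a crude probe is easy to write, though it is only a sketch: it goes through the filesystem and the page cache, so the numbers are optimistic compared to a real tool like fio. Assuming Python on a POSIX system, with the test file placed on the SSD under test:

        import os
        import random
        import time

        # Time fsynced 4 KB writes at random aligned offsets in a 256 MB file.
        # Page-cache and filesystem effects make this optimistic; it is a
        # rough probe, not a benchmark.
        size = 256 * 1024 * 1024
        block = os.urandom(4096)
        fd = os.open("testfile.bin", os.O_RDWR | os.O_CREAT)
        os.ftruncate(fd, size)

        start = time.perf_counter()
        writes = 1000
        for _ in range(writes):
            offset = random.randrange(size // 4096) * 4096
            os.pwrite(fd, block, offset)
            os.fsync(fd)
        elapsed = time.perf_counter() - start
        print(f"{writes / elapsed:.0f} fsynced 4 KB writes/s")
        os.close(fd)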

  • AWS own email domain and some generic questions

    - by John Brunner
    I'm getting started with Amazon Web Services, and I have a few questions I'm not sure about. Like every company webpage, I want to use an "[email protected]" email address, but how is that done? I looked at godaddy.com (for domain registration); they offer me an email address like I want, but for 3 dollars per month. Is this possible with AWS? Because at AWS you just get a complex domain which is not very user-friendly or serious. I also want to host my dynamic webpage on the Amazon cloud, but I'm not sure I'm doing that right. I've read many guides, and all I know is that I have to purchase an Elastic Compute Cloud instance and a Simple Storage Service... and every guide works with the basic Linux package - why not Windows? Is it more expensive? I just want to host a MySQL server for the dynamic webpage, which is reached over a normal domain. And one last question: when I sign up for an AWS account, it asks me for an email account, but I find it a little bit unserious to enter my free-webmailer address there... How is it done the normal way? Thanks in advance! Best regards, john.

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories for my clients. I need to control access to each repository individually - not just push access, but clone as well. I've got an .htaccess which requires authentication globally:

        AuthUserFile /path/to/hgweb.passwd
        AuthGroupFile /dev/null
        AuthName "Chris Lawlor Client Mercurial Repositories"
        AuthType Basic
        <Limit GET POST PUT>
            Require valid-user
        </Limit>
        <FilesMatch "\.(htaccess|passwd|config|bak)$">
            Order Allow,Deny
            Deny from all
        </FilesMatch>

    Then in each repository, I've got a .hg/hgrc file requiring a valid user:

        [web]
        allow_push = <comma separated user list>

    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read:

        http://mercurial.selenic.com/wiki/HgWebDirStepByStep
        http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
        http://hgbook.red-bean.com/read/collaborating-with-other-people.html

    and others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on WebFaction, if it matters - set up roughly like this: http://docs.webfaction.com/software/mercurial.html

  • Is it possible to change "working directory" of XeTeX?

    - by Herbert Sitz
    Using XeTeX, there are many working files that get created in the process of producing the PDF, and they litter the directory where my main .tex file is. Is it possible to change the working directory of XeTeX so that it stores all these scratch files in some other directory, out of the way? There is a previous question on Superuser.com that discusses a utility that cleans up the working files by deleting them after they're produced: http://superuser.com/questions/95712/how-to-avoid-littering-ones-tex-directories-with-intermediate-files. That solution doesn't work for me, since I'm using XeTeX; but it also seems like it would be preferable to simply designate a "scratch" directory where all working files are saved. I haven't been able to find any info on how to do that, though. Is there a way? (My question is prompted partly by the fact that I often work with files in a directory that is shared using Dropbox, so it creates a lot of unnecessary traffic if files are getting created and destroyed willy-nilly. I don't know if it affects speed in any way, but the idea of having a separate working directory that is not shared/replicated by Dropbox would be a cleaner solution, even if I could use the method suggested in the earlier thread.)

  • How to kill tasks in Windows 7 when even Task Manager won't open or respond?

    - by endolith
    Occasionally one of my computers will get so bogged down that everything locks up: Ctrl+Alt+Del doesn't work, Task Manager won't open, or they work but respond so slowly that it will take hours or days to shut down other processes and regain control of the computer. Is there a way to, for instance, force Task Manager to run at the highest priority so it always opens immediately with Ctrl+Shift+Esc, even when some other process/driver is hogging the CPU? Is there some other program that can run in the background and open immediately like this? This question isn't about fixing "underlying problems": no matter how much memory you have, it's still possible for a rogue process to eat it all up and lock up the computer in page-fault thrashing, hog the CPU, etc. This question is about how to take back control of the computer when that happens. Basically, when these kinds of lock-ups happen, I want to open some kind of task manager that pauses every other process, allows me to kill one of them, and then lets everything resume so I can save my work, etc. Otherwise my only option is to hold down the power button. Antifreeze is supposed to do exactly what I want - pausing all other applications and starting a task manager to kill the offender - but in my testing, it actually does neither.

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster, and the disk images are stored on a SAN system. I'm trying to find a solution that uses disk space sparingly, while still allowing for easy growth of individual machines. In theory, I would just create a big disk for each machine and use thin provisioning; each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to e.g. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks - but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: are there better ways to do this (that is, to grow the data size as needed without downtime)? Possible solutions include: using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size; finding an easy method of reclaiming free space on the partition (re-thinning?); something else? A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?

  • How to replicate a Windows server (IIS, files, configuration state)?

    - by Geo
    Maybe a better question is: what is the closest competitor to DoubleTake? I am looking to replicate a Windows production server so that, in case it fails, I have an immediate backup. Any ideas?

    NOTE 1: I forgot to add that this server is on the Amazon EC2 cloud.

    NOTE 2: The main situation we have is recreating the configuration state - things like IIS, the FTP server, SQL Server, the SVN server.

    NOTE 3: So far I have been given three options as answers to my original question:

        AppAssurance -- after talking to their sales team, they do not support Amazon as a cloud provider. Basically there is a technical need to be able to reboot from a disk or similar media, so an ESX virtual machine environment will work, but not EC2.
        Acronis -- works as a Ghost-style backup; this will work for other types of scenarios.
        Use the Amazon EC2 API -- this option is ideal, but only works if you are developing a cloud application rather than hosting a regular application in a cloud scenario.

    This means I am still looking for the answer. Any other ideas?
