Search Results

Search found 6083 results on 244 pages for 'graphical algorithm'.

Page 179/244 | < Previous Page | 175 176 177 178 179 180 181 182 183 184 185 186  | Next Page >

  • Using SSLv3 - Enabling Strong Ciphers Server 2008

    - by Igor K
    I've disabled SSLv2 and left SSLv3 on. However, I cannot connect to a remote server; the connection fails with "The client and server cannot communicate, because they do not possess a common algorithm". I ran an SSL check (http://www.serversniff.net/sslcheck.php) against the remote server and against ours, and noticed that none of the ciphers they accept are enabled on our server. How can this be configured? (Windows Web Server 2008)

    The remote server accepts these SSL ciphers:

        DHE-RSA-AES256-SHA
        AES256-SHA
        EDH-RSA-DES-CBC3-SHA
        DES-CBC3-SHA
        DHE-RSA-AES128-SHA
        AES128-SHA

    Our server by default accepts:

        DES-CBC3-SHA
        RC4-SHA
        RC4-MD5
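
    On Windows Server 2008, which ciphers Schannel offers is controlled per-cipher in the registry. A minimal sketch for enabling the AES ciphers, assuming the stock SCHANNEL key layout (back up the registry first; a reboot is required):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 128/128" /v Enabled /t REG_DWORD /d 0xffffffff /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 256/256" /v Enabled /t REG_DWORD /d 0xffffffff /f

    Note that the AES suites are TLS cipher suites rather than SSLv3 ones, so TLS 1.0 must be enabled alongside SSLv3 for them to be negotiated.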

    Read the article

  • Windows Server 2008 Antivirus Software with an API

    - by Dave Jellison
    I'm looking for an antivirus package that is compatible with Windows Server 2008. That's not the hard part. What I need is an API layer on the antivirus that I can call from managed .NET code. For example: I am developing an ASP.NET (C#) website that allows users to upload files to the web server the site resides on. We have full control of the server, so there are no security/rights issues there. I need to be able to run the antivirus engine on the newly uploaded files without (hopefully) shelling out to a command-line version of the software. Does anyone know of such a package?

    Read the article

  • Windows: How to make programs think they're not running in a terminal server session?

    - by sinni800
    I am using the program "SoftXPand 2011 Duo" by Miniframe on my Windows 7 PC. It makes two workstations out of one computer, using the terminal services built into Windows to create the additional session. I use two screens, two keyboards and two mice to create this "illusion" of two computers. It works quite well; I can even play two different 3D games on the two screens attached to this single machine (using a Radeon HD5770 and a Core i5 2500K with 8 GB of RAM).

    There are a few downsides to this, and I just found one that is hidden at first look: every session, even on the first workstation, identifies itself as a terminal server session! Some programs run with limited (graphical) effects, and some won't run at all. This also resulted in some games not running; they just say "Cannot be run in a terminal server session" and exit. I have already proven that top modern games (DirectX 10, 11) run just as well as on the same machine without SoftXPand, so this is a pretty artificial limitation.

    So, can I somehow hack my current session so it no longer looks like a terminal server session? I.e. so that

        #include <windows.h>
        #pragma comment(lib, "user32.lib")

        BOOL IsRemoteSession(void)
        {
            return GetSystemMetrics(SM_REMOTESESSION);
        }

    returns FALSE? (Not a programming question! Just an example of how programs detect whether they're running in a terminal server session.)

    Read the article

  • 284 GiB of data, 217.4 GiB of space

    - by Malfist
    I want to reinstall my OS, but I don't have the hard drive space to back up any more (I have a RAID 1 array, so I haven't done a backup for a while). In my /home I have 284.8 GiB of data, and I have a spare 250 GB (i.e. 217.4 GiB) hard drive that I've been using for backups. What type of compression algorithm (if any) is capable of this kind of ratio? I don't care about the time; I have a quad core, though, so something that utilizes all 4 cores would be great. I have tried 7-Zip with no success: it ran on one core for two days and failed because of lack of space. Any ideas?
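
    A minimal sketch of a multi-core attempt, assuming pigz (parallel gzip) or xz >= 5.2 is installed and the spare drive is mounted at /mnt/backup. Whether 284.8 GiB fits in 217.4 GiB depends entirely on the data: already-compressed photos and videos will hardly shrink at all.

        # parallel gzip, uses all cores by default
        tar -cf - /home | pigz > /mnt/backup/home.tar.gz

        # or multi-threaded xz; -3 trades ratio for speed and memory
        xz --version    # -T0 needs xz >= 5.2
        tar -cf - /home | xz -T0 -3 > /mnt/backup/home.tar.xz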

    Read the article

  • ublas::bounded_vector<> being resized?

    - by n2liquid
    Now, seriously... I'll refrain from using bad words here because we're talking about the Boost fellows. It MUST be my mistake to see things this way, but I can't understand why, so I'll ask it here; maybe someone can enlighten me in this matter. Here it goes:

    uBLAS has this nice class template called bounded_vector<> that's used to create fixed-size vectors (or so I thought). From the Effective uBLAS wiki (http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Effective_UBLAS):

        The default uBLAS vector and matrix types are of variable size. Many linear algebra problems involve vectors with fixed size. 2 and 3 elements are common in geometry! Fixed size storage (akin to C arrays) can be implemented efficiently as it does not involve the overheads (heap management) associated with dynamic storage. uBLAS implements fixed sizes by changing the underlying storage of a vector/matrix to a "bounded_array" from the default "unbounded_array".

    Alright, this bounded_vector<> thing frees you from having to specify the underlying storage of the vector as a bounded_array<> of the specified size. Here I ask you: doesn't it look like this bounded vector thing has a fixed size? Well, it doesn't. At first I felt betrayed by the wiki, but then I reconsidered the meaning of "bounded" and I think I can let it pass. But in case you, like me (I'm still uncertain), are still wondering whether this makes sense: what I found out is that bounded_vector<> actually can be resized; it may only not grow beyond the size specified as the template parameter. So:

        1. First off, do you think they had a good reason not to make a truly fixed-size vector or matrix type?
        2. Do you think it's okay to "sell" this bounded (as opposed to fixed-size) vector to the users of my library as a "fixed-size" vector replacement, even named "Vector3" or "Vector2", like the Effective uBLAS wiki did?
        3. Do you think I should somehow implement a vector with fixed size for this purpose? If so, how? (Sorry, but I'm really new to uBLAS; I just tried it today.)
        4. I am developing a 3D game. Should uBLAS be used for the calculations involved in this ("hey, geometry!", per the Effective uBLAS wiki)? What replacement would you suggest, if not?

    -- edit

    And just in case, yes, I've read this warning:

        It should be noted that this only changes the storage uBLAS uses for the vector3. uBLAS will still use all the same algorithms (which assume a variable size) to manipulate the vector3. In practice this seems to have no negative impact on speed. The above runs just as quickly as a hand crafted vector3 which does not use uBLAS. The only negative impact is that the vector3 always stores a "size" member which in this case is redundant [or isn't it? I mean......].

    I see it uses the same algorithms, assuming a variable size, but if an operation were to actually change its size, shouldn't it be stopped (assertion)?

        ublas::bounded_vector<float,3> v3;
        ublas::bounded_vector<float,2> v2;
        v3 = v2;
        std::cout << v3.size() << '\n'; // prints 2

    Oh, come on, isn't this just plain betrayal?

    Read the article

  • linux/shell: change a file's modify timestamp relatively?

    - by index
    My Canon camera produces files like IMG_1234.JPG and MVI_1234.AVI. It also timestamps those files. Unfortunately, during a trip to another timezone several cameras were used, one of which did not have the correct time zone set - a metadata mess. Now I would like to correct this. Proposed algorithm:

        1. read the file's modify date
        2. add a delta, i.e. hhmmss (preferred: change the timezone)
        3. write the new timestamp

    Unless someone knows a tool or a combination of tools that does the trick directly, maybe one could simplify the calculation using epoch time (seconds since the epoch) and whip up a shell script. Any help appreciated!
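
    A minimal sketch of the proposed algorithm, assuming GNU stat and touch and a uniform offset (here +2 hours = 7200 seconds; the delta and the glob are illustrative and should be adjusted):

        delta=7200
        for f in IMG_*.JPG MVI_*.AVI; do
            old=$(stat -c %Y "$f")    # 1. read the modify date as epoch seconds
            new=$((old + delta))      # 2. add the delta
            touch -d "@$new" "$f"     # 3. write the new timestamp
        done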

    Read the article

  • Ubuntu 12.04/12.10 can't detect Windows or any other partitions (ASUS Z77 UEFI BIOS)

    - by user971155
    I've recently finished tinkering with my new PC (an ASUS Z77 motherboard with UEFI BIOS) and unfortunately not everything works quite well. After installing Windows 7 Ultimate on a single primary partition (SATA drive), I decided to allocate one more logical partition for additional needs. When I tried doing it with the manager, it said that it couldn't allocate the requested size, even though I certainly asked for much less than was available. I thought that it might be a Windows issue and proceeded to installing Ubuntu 12.10 x64. When the graphical interface loaded, it showed me a message stating that it can't find any other operating system on the drive. When I used the custom partitioning option, it showed me none of my current partitions (including the one with Windows). However, when I boot with the "Try Ubuntu" feature, it does find them! I find it weird, though. Here's what the console presents me with:

        ubuntu@ubuntu:~$ sudo os-prober
        /dev/sda1:Windows 7 (loader):Windows:chain
        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00072b98

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848   100020223    49906688    7  HPFS/NTFS/exFAT
        /dev/sda3       100022270  1250263039   575120385    5  Extended
        /dev/sda4       566669312  1250263039   341796864   83  Linux

    I also tried creating partitions from the Disk Utility, which results in an error:

        Error creating partition: helper exited with exit code 1:
        In part_add_partition: device_file=/dev/sda, start=51211402240, size=1923000000, type=0x83
        Entering MS-DOS parser (offset=0, size=640135028736)
        MSDOS_MAGIC found
        looking at part 0 (offset 1048576, size 104857600, type 0x07)
        new part entry
        looking at part 1 (offset 105906176, size 51104448512, type 0x07)
        new part entry
        looking at part 2 (offset 51211402240, size 588923274240, type 0x05)
        Entering MS-DOS extended parser (offset=51211402240, size=588923274240)
        readfrom = 51211402240
        MSDOS_MAGIC found
        Exiting MS-DOS extended parser
        looking at part 3 (offset 290134687744, size 349999988736, type 0x83)
        new part entry
        Exiting MS-DOS parser
        MSDOS partition table detected
        containing partition table scheme = 1
        got it
        Error: Can't have overlapping partitions.
        ped_disk_new() failed

    Here's what I get when I try to install the system: i.stack.imgur.com/pjlb9.png, i.stack.imgur.com/g1lXN.png

    P.S. It's strange that I can't even create any more partitions, neither with Disk Utility nor with Windows 7's native tools.
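
    The "Can't have overlapping partitions" message suggests that primary partition sda4 (sectors 566669312-1250263039) falls inside the extended partition sda3 (sectors 100022270-1250263039) without being a logical partition within it, which parsers stricter than fdisk refuse to accept. A diagnostic sketch for inspecting the layout from the "Try Ubuntu" live session, assuming parted is present (it ships on the live CD):

        sudo parted /dev/sda unit s print free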

    Read the article

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content; no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and given that nothing beats Apache when it comes to documentation, resources and support on the web, I will go with Apache for dynamic content.

    That apart, the confusion begins when it comes to choosing a web server for static content (including streaming video). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux options here). So, which to choose?

        1. I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd).
        2. Lighttpd's development has evidently gotten slower than it was a good while ago.
        3. The documentation is pretty good for all three, and hopefully so are the resources (knowledge of these among the users of the Stack Overflow/Server Fault sites, the web, etc.).

    Precisely, and noting points [2] and [3], if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify these:

        1. Is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.)
        2. If the answer to the question above is a big "hell, yes!", then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin user interface plus really good documentation. Yes?

    I could be wrong, I could be right. I have put down what I know so that you can offer the most relevant advice. Pardon me if anything I've said is offensive.

    Read the article

  • Roguelike FOV problem

    - by Manderin87
    I am working on a college compsci project and I would like some help with a field-of-view algorithm. It works mostly, but in some situations the algorithm sees through walls and highlights walls the player should not be able to see.

        void cMap::los(int x0, int y0, int radius) {
            // Does line of sight from any particular tile
            for (int x = 0; x < m_Height; x++) {
                for (int y = 0; y < m_Width; y++) {
                    getTile(x, y)->setVisible(false);
                }
            }

            double xdif = 0;
            double ydif = 0;
            bool visible = false;
            float dist = 0;

            for (int x = MAX(x0 - radius, 0); x < MIN(x0 + radius, m_Height); x++) {     // Loops through x values within view radius
                for (int y = MAX(y0 - radius, 0); y < MIN(y0 + radius, m_Width); y++) {  // Loops through y values within view radius
                    xdif = pow((double) x - x0, 2);
                    ydif = pow((double) y - y0, 2);
                    dist = (float) sqrt(xdif + ydif);    // Gets the distance between the two points

                    if (dist <= radius) {                // If the tile is within view distance,
                        visible = line(x0, y0, x, y);    // check if it can be seen.
                        if (visible) {                   // If it can be seen,
                            getTile(x, y)->setVisible(true);  // mark that tile as viewable
                        }
                    }
                }
            }
        }

        bool cMap::line(int x0, int y0, int x1, int y1) {
            bool steep = abs(y1 - y0) > abs(x1 - x0);
            if (steep) {
                swap(x0, y0);
                swap(x1, y1);
            }
            if (x0 > x1) {
                swap(x0, x1);
                swap(y0, y1);
            }

            int deltax = x1 - x0;
            int deltay = abs(y1 - y0);
            int error = deltax / 2;
            int ystep;
            int y = y0;
            if (y0 < y1) ystep = 1; else ystep = -1;

            for (int x = x0; x < x1; x++) {
                if (steep && getTile(y, x)->isBlocked()) {
                    getTile(y, x)->setVisible(true);
                    getTile(y, x)->setDiscovered(true);
                    return false;
                } else if (!steep && getTile(x, y)->isBlocked()) {
                    getTile(x, y)->setVisible(true);
                    getTile(x, y)->setDiscovered(true);
                    return false;
                }
                error -= deltay;
                if (error < 0) {
                    y = y + ystep;
                    error = error + deltax;
                }
            }
            return true;
        }

    If anyone could help me make the first blocked tile visible while stopping the rest, I would appreciate it.

    Thanks, Manderin87

    Read the article

  • Changing encryption settings for Microsoft Office 2010/2013

    - by iridescent
    Quoted from http://technet.microsoft.com/en-us/library/cc179125.aspx:

        Although there are Office 2013 settings to change how encryption is performed, when you encrypt Open XML Format files (.docx, .xlsx, .pptx, and so on) the default values — AES (Advanced Encryption Standard), 128-bit key length, SHA1, and CBC (cipher block chaining) — provide strong encryption and should be fine for most organizations.

    I can't figure out where the setting to change how encryption is performed actually is. Is it possible to change the encryption algorithm being used instead of the default AES-128? Thanks.
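
    The cited page refers to Group Policy settings shipped with the Office ADMX templates. As a sketch under a loud assumption: for Office 2010 (14.0), the policy "Encryption type for password protected Office Open XML files" is commonly described as writing a CSP string to a value named OpenXMLEncryption; both the value name and the CSP string below are assumptions to verify against your ADMX version before relying on them:

        reg add "HKCU\Software\Policies\Microsoft\Office\14.0\Common\Security" /v OpenXMLEncryption /t REG_SZ /d "Microsoft Enhanced RSA and AES Cryptographic Provider,AES 256,256"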

    Read the article

  • If there are multiple kernel modules that can drive the same device, what is the rule for choosing between them?

    - by Dyno Fu
    Both pcnet32 and vmxnet can drive the device:

        $ lspci -k
        ...
        02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
                Subsystem: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
                Flags: bus master, medium devsel, latency 64, IRQ 19
                I/O ports at 2000 [size=128]
                [virtual] Expansion ROM at dc400000 [disabled] [size=64K]
                Kernel driver in use: vmxnet
                Kernel modules: vmxnet, pcnet32

    Both kernel modules are loaded:

        $ lsmod | grep net
        pcnet32                32644  0
        vmxnet                 17696  0
        mii                     5212  1 pcnet32

    As you can see, the kernel driver in use is vmxnet. Is there any policy or algorithm in the kernel for how to choose among the candidates?
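
    As far as I know there is no priority scheme between two matching drivers: whichever module's PCI ID table matches first at probe time (typically the module that registers first) binds the device. A sketch of forcing the choice by keeping the unwanted module from auto-loading, assuming a modprobe.d-based distribution:

        # prefer pcnet32: stop vmxnet from being auto-loaded, then rebind
        echo "blacklist vmxnet" | sudo tee /etc/modprobe.d/blacklist-vmxnet.conf
        sudo rmmod vmxnet
        sudo modprobe pcnet32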

    Read the article

  • Service haproxy error

    - by user128296
    I want to configure HAProxy for outgoing mail load balancing. My configuration file /etc/haproxy.cfg is:

        global
            maxconn 4096    # Total max connections. This is dependent on ulimit
            daemon
            nbproc 4        # Number of processing cores. A dual dual-core Opteron is 4 cores, for example.

        defaults
            mode tcp

        listen smtp_proxy 199.83.95.71:25
            mode tcp
            option tcplog
            balance roundrobin    # Load balancing algorithm
            ## Define your servers to balance
            server r23.lbsmtp.org 74.117.x.x:25 weight 1 maxconn 512 check
            server r15.lbsmtp.org 199.71.x.x:25 weight 1 maxconn 512 check

    And when I start the haproxy service I get this error:

        Starting HAproxy: [ALERT] 244/172148 (7354) : cannot bind socket for proxy smtp_proxy. Aborting.

    Please tell me where I am making a mistake. Help will be appreciated.
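
    "cannot bind socket" at startup usually means the listen address is not configured on any local interface, or something else already owns the port. A couple of diagnostic commands, as a sketch:

        ip addr | grep 199.83.95.71       # is the address actually assigned to this host?
        netstat -ltnp | grep ':25 '       # is another MTA already listening on port 25?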

    Read the article

  • Reducing both pagefile.sys and hiberfil.sys in Windows 7

    - by greenber
    I recently used Defraggler to consolidate the free space on my D: drive, preparatory to using Disk Manager to break the drive into two areas: one as my "data area" for Windows 7 (normally on my C: drive) and one to experiment with Windows 8. The Defraggler program works so well that I ran it on my C: drive too, and I ended up with a lot of free space on both my C: and D: drives. I was very happy. And then I woke up the next day and had virtually no free space left: something like 8 MB on my C: drive and about 3 GB on my D: drive. I then ran Wintree (which gives a nifty graphical representation of disk usage) and found I had a large page file and a large hiberfil. So I temporarily turned off hibernation and reduced the page file size to 2000 megabytes, then rebooted so that both changes would take effect. It had no effect on the C: drive or the D: drive. That makes no sense to me. What caused the free space on each drive to disappear, and why doesn't reducing the page file and turning off hibernation free up disk space on either C: or D:? Would it make sense to delete the two files in question and, if so, how do I go about doing that? Safe mode? Thanks. Ross
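
    For reference, hibernation is toggled (and hiberfil.sys removed) from an elevated command prompt; a minimal sketch, assuming Windows 7 defaults:

        powercfg -h off
        dir /a:hs C:\hiberfil.sys    # should now report "File Not Found"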

    Read the article

  • Small office network setups

    - by user39822
    I work at a small office and we're overhauling our network setup there. We're a web dev company and at the moment we have 50+ production sites running on the same machine that runs our internal email, which is just plain stupid. We're moving all our client hosting off-site and are now looking for something to cover our internal office requirements. Below is a brain dump:

        - An equal number of Macs and PCs, about 25 machines in total.
        - We need a central "server" to host files that should be accessible to everyone as a "network drive" (a minimal sketch follows below). If possible we'd like to use low-cost hardware for this (Mac- or Windows-based). Disk space should be upward of 1 TB.
        - Ideally we should also be able to run a small web server on this machine (LAMP stack) to run some planning and billing applications we wrote ourselves.
        - We need some sort of MS Exchange alternative for things like a shared calendar and, especially, the ability to set Out of Office replies.
        - We have one printer that is connected to the network.
        - The setup should preferably be manageable via a graphical interface and NOT require command-line skills.
        - Users want to keep using Apple Mail or MS Outlook.

    After a quick google I came across the Zimbra collaboration suite; can anyone recommend this or any other solution for our office?
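
    Regarding the "network drive" point above: if the central box ends up running Linux, a Samba share covers both the Macs and the PCs. A minimal smb.conf sketch; the share name, path and group are illustrative assumptions:

        [office]
            path = /srv/office
            read only = no
            valid users = @staff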

    Read the article

  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 *

    After monkeying with additional debug logging, I see some log entries of interest:

        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more
        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors.

    If I can remedy the above messages, I think I'll be good to go.

    * EDIT 2 *

    Grasping at straws, I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite documentation stating the files are created automatically, and despite what I have seen before... I don't know why, but that did the trick. I also re-checked perms on the dir the files live in; as certain as I was, they were correct, with named having rw.

    CentOS 6 (final), dhcpd 4.1.1-P1, named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6

    Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created. chroot: /var/named/chroot

    I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic); no matter which dir, with named owning and wide-open perms, I now get nowhere. Along the way I, at one point, got a permission denied when named tried to create the journal. I resolved that issue with:

        chown --recursive named:named /var/named
        chmod --recursive 777 /var/named

    The journal was then created, and here's where things fell apart. I attempted to tame permissions to something more sane and broke it. Once changed, and having restarted named, it threw an error indicating the journal was out of sync (or something to that effect)... it didn't matter, since this is a new setup, so I deleted it, and now it is not recreated. Now, though, I see no errors in /var/log/messages, my chrooted /var/log/named.log, or the chrooted /var/log/named.debug. I increased the debug level with 'rndc trace' - no love. Increased trace to 10, still nothing. SELinux is disabled:

        [root@server temp]# sestatus
        SELinux status:                 disabled

    dhcpd.conf:

        allow client-updates;
        ddns-update-style interim;

        subnet 192.168.111.0 netmask 255.255.255.224 {
            ...
            key dhcpudpate {
                algorithm hmac-md5;
                secret LDJMdPdEZED+/nN/AGO9ZA==;
            }
            zone example.lan. {
                primary 192.168.111.2;
                key dhcpudpate;
            }
        }

    named.conf:

        key dhcpudpate {
            algorithm hmac-md5;
            secret "LDJMdPdEZED+/nN/AGO9ZA==";
        };

        zone "example.lan" {
            type master;
            file "/var/named/dynamic/example.lan.db";
            allow-transfer { none; };
            allow-update { key dhcpudpate; };
            notify false;
            check-names ignore;
        };

    The following shows /var/log/named.log output of named starting up - no errors:

        27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0
        27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0
        27-Jul-2012 21:33:39.353 general: notice: running
        27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10'
        27-Jul-2012 21:34:03.825 general: info: debug level is now 10

    ...and /var/log/messages for a named start:

        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium,
        Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
        Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are
        Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support
        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576
        Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads
        Jul 27 23:02:04 server named[9124]: using up to 4096 sockets
        Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf'
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53
        Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS
        Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys'
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys'
        Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953

    What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot there and, if so, how? Many thanks.
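
    To separate the two moving parts, the key can be exercised against named directly, bypassing dhcpd; a sketch using the key from the question (run on the server, then check whether a fresh example.lan.db.jnl appears next to the zone file):

        printf '%s\n' \
          'server 192.168.111.2' \
          'zone example.lan.' \
          'update add test.example.lan. 300 A 192.168.111.50' \
          'send' | nsupdate -y hmac-md5:dhcpudpate:LDJMdPdEZED+/nN/AGO9ZA==

    If this updates the zone and creates the journal, the named side is fine and the problem sits in dhcpd's update path.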

    Read the article

  • Is having a [high-end] video card important on a server?

    - by Patrick
    My application is quite an interactive one, with lots of colors and drag-and-drop functionality, but no fancy 3D stuff, animations or video, so I have only used plain GDI (no GDI+, no DirectX). In the past my applications ran on desktops or laptops, and I suggested my customers invest in a decent video card with:

        - a minimum resolution of 1280x1024
        - a minimum color depth of 24 bits
        - X megabytes of memory on the video card

    Now my users are switching more and more to terminal servers, hence my questions:

        - What is the importance of a video card on a terminal server?
        - Is a video card needed on the terminal server at all?
        - If it is, is the resolution of the remote desktop client limited to the resolutions supported by the video card on the server?
        - Can the choice of video card in the server influence the performance of the applications running on the terminal server (but shown on a desktop PC)?
        - If I start to make use of graphical libraries (like Qt) or things like DirectX, will this have an influence on the choice of video card on the terminal server? Are calculations in that case 'offloaded' to the video card? Even on the terminal server?

    Thanks.

    Read the article

  • Windows 7 shows a drive as full in summary but files shown on drive are very small

    - by Rob
    I have a drive partitioned so it is seen by Windows as two drives: C:\ and D:\.

    Windows 7 shows D:\ as full in the graphical summary in 'My Computer': the bar graph indicates that nearly all of the drive's capacity (108 GB) is used. So I go into the D:\ drive to look at the files and see several folders. I select them all and use the right-click Properties menu to count their size, expecting the value to be about the same as what Windows reports in the summary, i.e. nearly 108 GB. But the Properties window shows the files are very small, KBs and MBs, nowhere near 108 GB. One of the folders is a backup, but its size is very small. I've checked the folder options to show all system files and hidden files too, and counted these in the Properties as well. Something invisible is holding the space.

    What is happening here? I'm afraid to delete anything in case it removes valuable backups. Have I got huge backups here? Why can't I see them? How do I see them?
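
    Space that is "used" but not attributable to any visible file on Windows 7 is often held by Volume Shadow Copies (System Restore / Previous Versions), which no folder listing will show. A diagnostic sketch from an elevated command prompt:

        vssadmin list shadowstorage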

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git, which is designed for handling large files. It makes heavy use of symlinks: the actual large files are moved to the .git/annex directory, and the original paths become symlinks pointing there. I am running out of disk space, need to clean up, and want to see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However, they treat a symlink as essentially empty and only count the file it points to as using any space. That means with git-annex they show no space used in the main directories and lots of space used in the .git/annex directory, which is not helpful.

    Is there any (graphical or ncurses-based) disk usage programme for Linux (apt-get installable would be easiest) that is capable (through options or not) of counting a symlink as using the space of the file it points to? Many tools have options for different behaviour for hard links, so it makes sense that some should offer the same for symlinks.

    (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc. But that's OK for my purposes.)
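
    For a quick number rather than a browsable tool, GNU du can be told to follow symlinks; a sketch (with the double-counting caveat noted in the question):

        du -Lh --max-depth=1 ~ | sort -h    # -L counts each symlink as the file it points to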

    Read the article

  • Reshape linux md raid5 that is already being reshaped?

    - by smammy
    I just converted my RAID1 array to a RAID5 array and added a third disk to it. I'd like to add a fourth disk without waiting fourteen hours for the first reshape to complete. I just did this:

        mdadm /dev/md0 --add /dev/sdf1
        mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0_n3.bak

    The entry in /proc/mdstat looks like this:

        md0 : active raid5 sdf1[2] sda1[0] sdb1[1]
              976759936 blocks super 0.91 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
              [>....................]  reshape =  1.8% (18162944/976759936) finish=834.3min speed=19132K/sec

    Now I'd like to do this:

        mdadm /dev/md0 --add /dev/sdd1
        mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md8_n4.bak

    Is this safe, or do I have to wait for the first reshape operation to complete?

    P.S.: I know I should have added both disks first and then reshaped from 2 to 4 devices, but it's a little late for that.
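
    The conservative path is to let the running reshape finish unattended and chain the second grow after it; a sketch using mdadm's blocking wait (device names taken from the question, backup-file name illustrative):

        mdadm --wait /dev/md0    # blocks until the running reshape completes
        mdadm /dev/md0 --add /dev/sdd1
        mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0_n4.bak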

    Read the article

  • How secure is Microsoft Office 2007's encryption?

    - by ericl42
    I've read various articles about Microsoft's encryption, and from what I gather, Office 2007 is secure using all default options because it uses AES, and Office 2000 and 2003 can be made secure by changing the default algorithm to AES. I was wondering if anyone else has read other articles or knows of any specific vulnerabilities in how they implement the encryption. I would like to be able to tell users that they can use this to send semi-sensitive documents, as long as they use AES and a strong password. Thanks for the information.

    Read the article

  • Why do moving lines become fuzzy on my monitor?

    - by CodeInChaos
    I recently got a new notebook. With moving images there are some graphical issues, and I'd like to know what causes them. None of my earlier monitors exhibited similar issues.

    Moving high-contrast lines become jagged, similar to interlaced video. When a horizontal line moves vertically, the artifacts are colored; when a vertical line moves horizontally, they aren't. The effect isn't observable in static images, and when the movement is faster, the zone in which it occurs becomes wider. The effect is very visible when I move a window around: it appears at the window borders and wherever high-contrast lines exist. It also appears when watching videos. In the image, the vertical line moves to the right and the horizontal line moves upwards.

    The effect is most likely related to the fact that each physical pixel consists of different sub-pixels for the different color channels. But how would that cause the observed effect? Is the speed at which the different colors reach the target brightness different? The optical impression is that every second pixel, in a chessboard-like arrangement, adapts more slowly than its neighbors. But that doesn't make much sense.

    Read the article

  • linux/shell: change a file's modify timestamp relatively?

    - by index
    My Canon camera produces files like IMG_1234.JPG and MVI_1234.AVI. It also timestamps those files. Unfortunately, during a trip to another timezone several cameras were used, one of which did not have the correct time zone set - a metadata mess. Now I would like to correct this (not the EXIF data, but the file's "modify" timestamp on disk). Proposed algorithm:

        1. read the file's modify date
        2. add a delta, i.e. hhmmss (preferred: change the timezone)
        3. write the new timestamp

    Unless someone knows a tool or a combination of tools that does the trick directly, maybe one could simplify the calculation using epoch time (seconds since the epoch) and whip up a shell script. Any help appreciated!

    Read the article

  • How to connect to DB2 when the password ends with '!' in Windows

    - by AngocA
    I am facing a problem using the DB2 tools with a generic account whose generated password ends with the bang sign '!'. I am not allowed to change the password because it is already used by other processes. I know the user is valid and I can connect to the database with its credentials, but not from all DB2 tools. With the Control Center it is okay. With the Command Editor (GUI) or the Command Window, I get this error message:

        connect to WAREHOUS user administrator using !
        SQL0104N  An unexpected token "!" was found following "<identifier>".
        Expected tokens may include:  "NEW".  SQLSTATE=42601

    Let's say that my password is pass@!. I am trying to use

        c:\>db2 connect to sample user administrator using "pass@!"

    or

        c:\>db2 connect to sample user administrator using pass@!

    and in both cases I get the same error message. I could change the way I connect, for example:

        c:\>db2 connect to sample user administrator
        Enter current password for administrator:

    But I cannot use that from a batch file easily. I would like to know how I can connect from the Command Editor, in order to use this user from the graphical tools. BTW, I know that the Control Center is deprecated.

    Read the article

  • Xubuntu stuck after login

    - by viraptor
    How can I debug an issue with Xubuntu 12.04 (fresh install) which just sits idle for about 30 seconds after login? The login screen is displayed correctly. After login, I get my desktop background, but no panels or auto-starting apps. It doesn't seem to be an authentication/PAM issue, because I can log in without delay at the console while the graphical session is still stuck. There's no disk or CPU activity and no obvious respawning of any process when I look at htop. There's nothing obviously wrong in .xsession-errors. The most interesting errors:

        openConnection: connect: No such file or directory
        cannot connect to brltty at :0
        WARNING: gnome-keyring:: couldn't connect to: /tmp/keyring-wFn4VR/pkcs11: No such file or directory
        ...
        (polkit-gnome-authentication-agent-1:2131): polkit-gnome-1-WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        ...
        ** Message: applet now removed from the notification area
        ** Message: using fallback from indicator to GtkStatusIcon
        ...
        (xfce4-indicator-plugin:2176): libindicator-WARNING **: IndicatorObject class does not have an accessible description.
        ...
        (xfce4-indicator-plugin:2176): Indicator-Application-WARNING **: Unable to get application list: Operation was cancelled

    Bootchart seems to end before I log in, so it's not that helpful. Where else can I look for information?

    Read the article

  • Strange performance behaviour for 64 bit modulo operation

    - by codymanix
    The last three of these method calls take approximately double the time of the first four. The only difference is that their arguments no longer fit in an integer. But should this matter? The parameter is declared to be long, so it should use long for the calculation anyway. Does the modulo operation use another algorithm for numbers > int.MaxValue? I am using an AMD Athlon64 3200+, WinXP SP3 and VS2008.

        Stopwatch sw = new Stopwatch();
        TestLong(sw, int.MaxValue - 3l);
        TestLong(sw, int.MaxValue - 2l);
        TestLong(sw, int.MaxValue - 1l);
        TestLong(sw, int.MaxValue);
        TestLong(sw, int.MaxValue + 1l);
        TestLong(sw, int.MaxValue + 2l);
        TestLong(sw, int.MaxValue + 3l);
        Console.ReadLine();

        static void TestLong(Stopwatch sw, long num)
        {
            long n = 0;
            sw.Reset();
            sw.Start();
            for (long i = 3; i < 20000000; i++)
            {
                n += num % i;
            }
            sw.Stop();
            Console.WriteLine(sw.Elapsed);
        }

    EDIT: I now tried the same with C, and the issue does not occur here; all modulo operations take the same time, in release and in debug mode, with and without optimizations turned on:

        #include "stdafx.h"
        #include "time.h"
        #include "limits.h"

        static void TestLong(long long num)
        {
            long long n = 0;
            clock_t t = clock();
            for (long long i = 3; i < 20000000LL * 100; i++)
            {
                n += num % i;
            }
            printf("%d - %lld\n", clock() - t, n);
        }

        int main()
        {
            printf("%i %i %i %i\n\n", sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));

            TestLong(3);
            TestLong(10);
            TestLong(131);
            TestLong(INT_MAX - 1L);
            TestLong(UINT_MAX + 1LL);
            TestLong(INT_MAX + 1LL);
            TestLong(LLONG_MAX - 1LL);

            getchar();
            return 0;
        }

    EDIT2: Thanks for the great suggestions. I found that both .NET and C (in debug as well as in release mode) do not use a single CPU instruction to calculate the remainder; they call a function that does it. In the C program I could get its name, which is "_allrem". It also displayed full source comments for this file, so I found the information that this algorithm special-cases 32-bit divisors rather than dividends, which was the case in the .NET application. I also found out that the performance of the C program really is affected only by the value of the divisor, not the dividend. Another test showed that the performance of the remainder function in the .NET program depends on both the dividend and the divisor.

    BTW: Even simple additions of long long values are calculated by consecutive add and adc instructions. So even if my processor calls itself 64-bit, it really isn't :(

    EDIT3: I now ran the C app on a Windows 7 x64 edition, compiled with Visual Studio 2010. The funny thing is, the performance behavior stays the same, although now (I checked the assembly source) true 64-bit instructions are used.

    Read the article

< Previous Page | 175 176 177 178 179 180 181 182 183 184 185 186  | Next Page >