Search Results

Search found 5267 results on 211 pages for 'use cases'.


  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using the Intel Matrix Storage RAID solution, which lets me use my five 1 TB drives for two RAID volumes: a 1 TB RAID 0 striped across all five drives, and a RAID 5 across the remaining free space on all drives (around 2.85 TB usable). The RAID 0 holds the OS, applications and games, while the RAID 5 is more permanent storage (photos, etc.).

    Now, I realize that running the OS and applications on a RAID 0 across five drives is very dangerous, which brings up the following question: is there a reliable freeware real-time backup application that can back up a set of folders from one drive to another (no online backup needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements:

    - Low CPU and RAM usage.
    - Real-time backups: as soon as a file is modified in the source folder, it is added to a backup queue that is processed at the lowest priority when the CPU is idle. This queue should persist across restarts (i.e. the source and destination folders should always hold the same set of files, except those still waiting in the queue).
    - Incremental backups: if only 10 bytes changed in a 1 GB file, the app should copy only those 10 bytes.
    - Ability to back up locked and open files (some apps, like Yadis, can't back up critical files like browser favorites).
    - Ability to run as a service (no user needs to log in for the app to start).

    Optional requirements:

    - Compression of the destination into a well-known format (RAR, Zip) that can be read directly without the application.
    - Preset source folders (Browser Favorites, Game Saves, Application Settings, etc.).

    The idea is to use the RAID 0 array as "semi-persistent, RAM-like" storage which, in case of a failure, can be quickly rebuilt by reinstalling the OS, apps and games and copying the settings, saves and favorites back from the RAID 5. I'm also thinking of taking this RAID-0-as-RAM idea to the extreme with SSDs (as soon as we get some nice 6 Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 would work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm hoping an application that satisfies these requirements already exists... otherwise I'll have to write one myself, which I would prefer not to do.
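
    For what it's worth, Windows Vista and later ship with robocopy, which can approximate the real-time mirroring requirement out of the box (it re-copies whole changed files, so it does not satisfy the byte-level incremental requirement). A minimal sketch, with source and destination paths as placeholder assumptions:

        rem Mirror D:\Data to E:\Backup; re-run whenever 1+ changes are seen,
        rem at most once per minute. /XJ skips junctions, /R and /W bound retries
        rem on locked files. Note: copies whole changed files, not byte deltas.
        robocopy D:\Data E:\Backup /MIR /MON:1 /MOT:1 /R:2 /W:5 /XJ /NP /LOG+:C:\backup.log

    Wrapped in a Task Scheduler job set to run at startup, this also covers the "runs without a user logged in" requirement.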


  • UPS vs Solar Power in case of power failure for a server [on hold]

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, and a way to keep it running through power failures of up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also stay powered during an outage. When I say low end, I don't mean crap: the CPU needs to be x86 and score at least 1000 in this chart: http://www.cpubenchmark.net/index.php What's the best way to do this?

    EDIT: more info. I need to run a home server that will perform mainly light tasks; an x86 CPU sadly is the only route for my use. I want to keep the server and the router/modem running during a power failure. Regarding how long the power will fail:

    1) 1 hour covers most situations (say 90%).
    2) 3 hours is OK (say 98%).
    3) 6 hours is more than OK (say 99.5%).
    4) In extreme cases the power might fail for days. I believe this is very unlikely - realistically, how often does power fail for more than 3 hours? Maybe once a year at worst. That's too rare to care about.

    Given the above, I am looking for a cost-effective way to achieve 1-3 hours of runtime, or 6 hours if possible. Solutions suggested so far:

    1) Power generator: no good, as the power drops for ~10 seconds before returning. Also, I read online that "clean" power generators cost $1.5k+, which is out of budget, and a non-clean generator might damage electronics, right?
    2) Solar power: I don't know for sure about this. It sounds like a great idea - honestly, too good to be true. For only $200 I get 100+ W? What are the drawbacks here?
    3) UPS: this seems to be the best option. The only problem is the cost. Under $200 would be great; $400 is the budget limit.
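
    A rough sizing sketch may help frame the UPS option; the 40 W combined load (PC plus modem) and the 12 V lead-acid battery figures are assumptions, not from the question:

        load:            40 W x 3 h        = 120 Wh
        inverter loss:   120 Wh / 0.85     ~ 141 Wh
        battery needed:  141 Wh / 12 V     ~ 12 Ah
        50% DoD margin:  12 Ah / 0.5       ~ 24 Ah

    A typical cheap ~500 VA UPS carries a 7-9 Ah battery (roughly 80-100 Wh), so it covers well under an hour at this load; hitting 3+ hours realistically means a UPS that accepts external battery packs or a much larger unit.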


  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes ends up in a funny state and I cannot mount it. (* Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.)

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    The question is: how do I make the device active again (using mdadm, I presume)? Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0 /opt ext4 defaults 0 0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time? This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.

    Edit: my /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand.

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions
        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes
        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>
        # instruct the monitoring daemon where to send mail alerts
        MAILADDR <my mail address>
        # definitions of existing MD arrays
        # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200

    In /proc/partitions the last entry is md_d0, at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)

    Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:

        ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]

    and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!
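
    A sketch of the usual recovery sequence for this situation, using the device names from the question (verify the current state with cat /proc/mdstat before running anything):

        # stop the half-assembled device, then reassemble from the on-disk superblocks
        sudo mdadm --stop /dev/md_d0
        sudo mdadm --assemble --scan

        # persist the array definition so it assembles under the same name at boot
        sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u   # Ubuntu: rebuild initramfs so early boot sees the ARRAY line

    This mirrors the resolution described above; the update-initramfs step is an assumption about what keeps the name stable across reboots on Ubuntu.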


  • What is good usage scenario for Rackspace Cloud Files CDN (powered by AKAMAI) [closed]

    - by Andrew Smith
    I have just set up my website as static pages via Rackspace CDN / Akamai:

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP headers:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering if this is really the fastest solution to power the website. Investigating through http://www.just-ping.com/ it seems that from many places the ping is very high, and on a quick look I found that they use GeoIP to resolve addresses based on WHOIS data, which is not accurate; because of that, the ping is above 300 ms from many places (for example, if an ISP's WHOIS record says Bangalore, requests get routed to Bangalore even when the client isn't there, and stay that way for something like a month). Meanwhile, just using Amazon Web Services with Route 53's anycast DNS and only 4 EC2 instances seems to keep India, for example, always below 100 ms, while with Akamai it goes above 300 ms in some cases - because Route 53 uses BGP.

    Quickly checking Akamai, it seems they are not taking feedback from the traffic: the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what they say on their website. They state that they optimize performance by taking feedback from requests, while it seems they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 / anycast DNS seems much more reliable, as does EdgeCast, which uses BGP - though I don't know how much it costs to deploy a static website there. Actually, I don't know whether EdgeCast's numbers aren't misleading either, because from isolated places there are many errors - their performance may come at the cost of delivery quality, with BGP switching routes during transfers of large files.

    So I am wondering what Akamai is really good for, because they don't seem to show strength in any field as far as I understand it now, except for a software-based WAF they offer on their website. What I really care about is the core distribution, so the question is: is Akamai really good for videos? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.
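
    A quick way to measure this from any vantage point yourself, rather than relying only on just-ping.com; a sketch using the hostname from the question:

        # where does the CDN hostname resolve from this network?
        dig +short www.example.co.uk

        # DNS, TCP-connect and time-to-first-byte for one request
        curl -o /dev/null -s -w 'dns:%{time_namelookup}s connect:%{time_connect}s ttfb:%{time_starttransfer}s\n' \
            http://www.example.co.uk/

    Running this repeatedly while pulling a large file would also test the "the CDN adapts to observed traffic" claim directly.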


  • Unable to burn Windows ISO from Fedora

    - by user331947
    First of all, English is not my native tongue, so apologies for any mistakes. My computer recently started prompting that it can't launch Windows successfully, so I just chose "Start Windows normally". Then I found that the startup freezes at the Windows logo screen (before the login prompt). I tried rebooting several times and got the same result, so I gave up. After a few days I tried to boot the laptop again. This time it got to the desktop, but it was extremely slow and the icons on my desktop didn't show up.

    I decided to format the Windows partition and reinstall a fresh copy. (It is usually faster that way, since I keep my 400 GB+ of data on another partition, with programs and the rest on the same partition as Windows.) The thing is, I can't get a Windows disc at the moment (I'm travelling abroad), but I have a Windows 7 disc image on my hard disk. So I downloaded Ubuntu 14.04 LTS, made a live USB, and tried to burn the image from Ubuntu. But the program just freezes and I don't know why. I tried several times and it's still the same. So I tried Fedora instead, just to see if it would work. Its Disk Image Writer reports something like this:

        Error unmounting /dev/dm-0: Command-line `umount "/dev/dm-0"' exited with non-zero exit status 32:
        umount: /: target is busy
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1).)
        (udisks-error-quark, 14)

    Also, I tried installing Linux on the Windows partition, but the installation program freezes too (both Ubuntu and Fedora). So I thought something might be wrong with my hard disk. I looked for a solution on the internet and found that GParted can be used to format a partition - and it also froze, at "Searching /dev/sda partition ...".

    I'm using a Lenovo Y570; spec below. http://www.notebookreview.com/notebookreview/lenovo-ideapad-y570-review-a-lenovo-bestseller/3/ Can anyone suggest a next step in diagnosing and fixing this problem? Thanks in advance.
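
    Since GParted also hangs while scanning /dev/sda, checking the disk itself before blaming the ISO tools seems sensible. A sketch runnable from the Ubuntu or Fedora live session (smartctl comes from the smartmontools package; /dev/sda is assumed to be the internal disk):

        sudo smartctl -H -a /dev/sda          # SMART health summary and error counters
        dmesg | grep -iE 'ata|sda|error'      # kernel I/O errors logged during the hangs
        sudo badblocks -sv -b 4096 /dev/sda   # read-only surface scan (slow, but non-destructive)

    Pending/reallocated sectors in the SMART output or ATA resets in dmesg would fit the freezing-at-boot symptoms far better than a software problem.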


  • Asus M4A79XTD-EVO / AMD Phenom II X4 965 Crashes / BSOD / Hangs / Restarts

    - by Tiby
    I'll try to be as concise as possible because I have a lot to say about my problem; I'd rather say it when asked or when I feel it's necessary, just to keep this initial reading clearer.

    For about a year and a half I have had periods when my system shows all the problems in the title (I'll use the word 'crash' for any of them). I'll list some patterns, what I tried, and the results, though the list is not exhaustive:

    - It usually crashes when a CPU-intensive operation is in progress, like a game, video encoding or HD movie rendering, but sometimes it also crashes when I'm doing nothing.
    - After a first crash the system is very unstable, and sometimes it crashes even during POST, or doesn't boot at all.

    Some months ago I went to a local repair shop (the rare kind where you put your computer on the table and sit there with a technician trying to figure out the problem). They used OCCT, and it crashed every time, whichever part was swapped out for testing (PSU, RAM, video card, HDD). The last part was the CPU: they changed the CPU and it didn't crash any more. Then, when they put my CPU back, it also didn't crash. We figured that the trouble was the thermal paste (probably some 2 years old), because it was the only thing changed during testing. Up until 2 weeks ago, I haven't had any more problems.

    2 weeks ago the problems reappeared. I changed the thermal paste again, applied some Arctic Silver 5, and for about a week everything worked perfectly (tried some games and video encoding; no more crashes). But then it started crashing in the same fashion as the first time. Now, though, I've noticed a very odd behaviour:

    - When I start some of the apps above, it crashes in most cases.
    - If I start OCCT, turn on the CPU test, and then run any of those programs, it doesn't crash, even with the CPU at 100% load (and 65-70 degrees Celsius).
    - If I shut down OCCT and continue using the programs, it crashes within a very short time (even with the CPU at 5-10% load and 40 degrees).

    There are so many patterns and temporary solutions that I've figured out over this year and a half that I can't include them all, because I don't know which are most relevant, but I'll happily provide any details you ask for. My system:

        CPU:   AMD Phenom II X4 965 (3400 MHz - 125W)
        MB:    ASUS M4A79XTD-EVO
        RAM:   Corsair Vengeance 8 GB (2 x 4GB) CL8 1600MHz
        Video: HIS Radeon HD5770 1GB
        PSU:   Corsair 750W
        HDD:   Western Digital 1TB
        OS:    Win 7 Enterprise 64-bit (also tried Windows Server 2008 R2 Trial and Win XP)
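
    One quick way to gather hard data around each crash, since BSODs and hangs under low load are involved: pull the most recent critical and error events from the System log right after a reboot. A sketch (wevtutil ships with Windows 7; Kernel-Power event 41 is the generic marker for an unexpected reboot):

        rem 20 most recent critical (Level=1) and error (Level=2) System events, newest first
        wevtutil qe System /q:"*[System[(Level=1 or Level=2)]]" /c:20 /rd:true /f:text

    Whether the entries are WHEA hardware errors, Kernel-Power 41s, or driver faults would help separate a CPU/board fault from a software one.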


  • Preventing 'Reply-All' to Exchange Distribution Groups

    - by Larold
    This is another question in a short series regarding a challenging Exchange project my co-workers have been asked to implement. (I'm helping, even though I'm primarily a Unix guy, because I volunteered to learn PowerShell and implement as much of the project in code as I could.)

    Background: we have been asked to create many distribution groups, say 500+. These groups will contain two types of members (apologies if I get these terms wrong): internal AD users, and external users for whom I create Mail Contact entries. We have been asked to make it so that a "Reply All" is not possible to any message sent to these groups. I don't believe that is 100% enforceable, for the reasons below. My question is: is my reasoning sound? If not, please feel free to educate me on whether and how this can properly be implemented. Thanks!

    My reasoning on why it's impossible to prevent 100% of potential reply-all actions (I admit the two points are somewhat similar):

    1. An internal AD user puts the DL in their To: field and clicks the '+' to expand the group. The group contains two external mail contacts. The message is sent to everyone, including those external contacts. External user #1 decides to reply-all, and his mail goes to at least external user #2, without even involving our Exchange mail relays.
    2. An internal AD user places the DL in their Outlook To: field, clicks the '+' button to expand the DL, and fires off an email to everyone who was in the group - but now the individual addresses are listed in the To: field. Because the message has multiple recipients in To:, the addresses have been "exposed", and anyone is free to reply-all to everyone listed.

    Even if we try to set a Reply-To: field on all of these DLs, external mail clients are not obligated to abide by it, or to force users to abide by it. Are my two points above valid? Am I correct to tell our leadership: "It is not possible to prevent 100% of the cases where someone will want to Reply-All to these groups UNLESS we train the users sending emails to these groups to use the Bcc: field at all times"? I am dying for any insight or parts of the equation I'm not seeing clearly. Thank you!!!
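
    One partial mitigation worth knowing about, sketched in the PowerShell the project is being built in (the group and sender names are placeholders): delivery restrictions on the DL itself. They don't stop reply-all between exposed individual addresses (point 2 above), but they do make Exchange reject a Reply-All addressed back to the group from anyone not on the allow list, external contacts included.

        # Only the named sender may submit mail to the DL; everyone else's
        # Reply-All to the group address bounces with an NDR.
        Set-DistributionGroup -Identity "DL-ProjectX" -AcceptMessagesOnlyFrom "announcements@example.com"

        # Verify the restriction took effect
        Get-DistributionGroup "DL-ProjectX" | Format-List Name,AcceptMessagesOnlyFrom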


  • How to generate build no with SVN revision no & Maven buildNumber plugin

    - by Binit jha
    I am using the buildnumber-maven-plugin to generate a build number from the latest SVN revision. But ${buildNumber} in our version is not resolved when the artifact is installed into the local .m2 repository. Here are the relevant POM details:

        <modelVersion>4.0.0</modelVersion>
        <groupId>com.hp.cloudprint</groupId>
        <artifactId>testutils</artifactId>
        <name>testutils</name>
        <version>6.3.rel.${buildNumber}</version>
        <description>This jar contains some helper classes which can simplify the writing of JUnit test cases.</description>
        <dependencies>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>buildnumber-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>useLastCommittedRevision</id>
                    <phase>validate</phase>
                    <goals>
                        <goal>create</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <doCheck>false</doCheck>
                <doUpdate>true</doUpdate>
                <getRevisionOnlyOnce>true</getRevisionOnlyOnce>
            </configuration>
        </plugin>
        <scm>
            <connection>scm:svn:https://acn-platform</connection>
            <developerConnection>scm:svn:https://abc-platform/trunk</developerConnection>
        </scm>
        </project>

    Build output:

        Building jar: C:\Documents and Settings\hpadmin\workspace\testutils\target\testutils-6.3.rel.2930.jar
        [INFO] [install:install]
        [INFO] Installing C:\Documents and Settings\hpadmin\workspace\testutils\target\testutils-6.3.rel.2930.jar
               to C:\Documents and Settings\jhab.m2\repository\com\hp\cloudprint\testutils\6.3.rel.${buildNumber}\testutils-6.3.rel.${buildNumber}.jar
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD SUCCESSFUL

    The target directory gets the correctly numbered jar (testutil-6.3.rel.2297.jar), but the installed path still contains the literal ${buildNumber}. Thanks in advance. Binit
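
    For context: Maven resolves only a handful of special expressions inside <version>, and a plugin-set property like ${buildNumber} is not among them at install/deploy time, which matches the symptom above. A common workaround (a sketch, not necessarily what the original poster chose) is to keep the version static and stamp the revision into the jar manifest instead:

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <configuration>
                <archive>
                    <manifestEntries>
                        <!-- populated by buildnumber-maven-plugin during the validate phase -->
                        <Implementation-Build>${buildNumber}</Implementation-Build>
                    </manifestEntries>
                </archive>
            </configuration>
        </plugin>

    The revision is then readable from META-INF/MANIFEST.MF in every build, while the installed artifact path stays stable.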


  • Domain: Netlogon event sequence

    - by Bob
    I'm getting really confused reading tutorials from the SAMBA HOWTO, which is a hell of a mess. Could you write, step by step, what events happen upon NetLogon? In particular, I can't get these things:

    1. I really can't get the mechanism of action of LDAP and its role. Should I think of Active Directory LDS as a superset of it? What are the other roles of AD, and why is this term nearly a synonym of the term "domain"? What's the role of LDAP in the remote login sequence? Does it store roaming user profiles? Does it store anything else? How is it called (are there any upper-level or lower-level services that use it in the course of NetLogon)?
    2. How do I join a domain? On the client machine I just use the Domain Controller admin credentials, but how do I prepare the Domain Controller for a new machine to join it? What's the deal with machine trust accounts? How are they used?
    3. Suppose I've just configured a machine to join a domain, created its machine trust, and added its data to the domain controller. How would that machine find a WINS server to query for the Domain Controller's NetBIOS name? Does any computer name ending with the <1C> type correspond to a domain controller?
    4. In what cases are Kerberos and LM/NTLM used for authentication? Where are password hashes stored in, say, a Windows 2000 domain controller - right in the registry? What is SAM - is it a service responsible for authentication and for sending/storing those passwords and accompanying information, such as group policies etc.? Who calls it? Does it use Active Directory?
    5. What's the role of NetBIOS apart from the name service? Can you exemplify a scenario of its usage as a "datagram distribution service for connectionless communication" or a "session service for connection-oriented communication"? (Quotes taken from the description of NetBIOS roles at http://en.wikipedia.org/wiki/NetBIOS_Frames_protocol.)

    Thanks, and sorry for the many questions.
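
    On question 2 specifically, a sketch of the classic Samba 3-style procedure (hostnames are placeholders; AD-style domains use net ads join instead): the machine trust account is just a Unix account whose name ends in $, plus a matching entry in Samba's password backend.

        # on the Samba DC: create the machine trust account for "workstation1"
        sudo useradd -d /dev/null -s /bin/false 'workstation1$'
        sudo smbpasswd -a -m workstation1

        # on the joining client: join the domain with DC admin credentials
        net rpc join -S DC-NAME -U Administrator

    Many Samba setups skip the manual useradd by defining an "add machine script" in smb.conf so the trust account is created on demand during the join.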


  • Thinkpad speaker turns mute - Linux Codec issue?

    - by Curlew
    At some point a few days ago the speakers on my Lenovo ThinkPad T410 (model number 2537A11) suddenly stopped working randomly. This error happens every time I watch a video or listen to a music file: the sound just abruptly stops. At the moment, I can't produce a single sound no matter what I do. I am using Debian GNU/Linux on this laptop and there doesn't appear to be anything else wrong (the fan is working, no abnormal heat (staying around ~40°C), no other obvious errors or problems).

    Here is the output of a nice program someone pointed me to:

        martin@martin:~/Downloads$ sudo python run.py --monitor
        Using temporary directory: /dev/shm/hda-analyzer
        You may remove this directory when finished or if you like to
        download the most recent copy of hda-analyzer tool.
        Downloading file hda_analyzer.py
        Downloading file hda_guilib.py
        Downloading file hda_codec.py
        Downloading file hda_proc.py
        Downloading file hda_graph.py
        Downloading file hda_mixer.py
        Downloaded all files, executing hda_analyzer.py
        Watching 1 cards
        ======================================

    Sound works normally, then it stops and the following lines appear:

        Diff for codec 0/0 (0x14f15069):
        --- +++ @@ -164,17 +164,17 @@
        Node 0x1f [Pin Complex] wcaps 0x400501: Stereo
          Pincap 0x00000010: OUT
          Pin Default 0x901701f0: [Fixed] Speaker at Int N/A
            Conn = Analog, Color = Unknown
            DefAssociation = 0xf, Sequence = 0x0
            Misc = NO_PRESENCE
          Pin-ctls: 0x40: OUT
        - Power: setting=D0, actual=D0
        + Power: setting=D3, actual=D3
          Connection: 2 0x10* 0x11
        Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital
          Pincap 0x00000010: OUT
          Pin Default 0x40f001f0: [N/A] Other at Ext N/A
            Conn = Unknown, Color = Unknown
            DefAssociation = 0xf, Sequence = 0x0
            Misc = NO_PRESENCE

    And now there is also an error in the dmesg output:

        hda-intel: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj.

    I changed bdl_pos_adj to various values (-1, 0, 64, 1024) and either there is no change at all or dmesg reports that the adjustment is too big. I wonder if bdl_pos_adj is the real reason for the error. Here is my hardware information as provided by the alsa-info.sh website.

    Okay, I did some serious testing and even installed Windows, and now I officially conclude that this is a hardware issue with my laptop speakers. Reasons:

    - The error occurs in my installed Debian Linux, an Ubuntu live distribution, and Windows XP.
    - No error message appears in any of the OSes; the sound just keeps "running" and I can't hear a thing.
    - I tested different setups, including OSS, ALSA, and the PulseAudio server on top.
    - If I use my new USB headphones, I can hear sound all the time, without any sudden silences.

    So obviously, hard as it is to believe, my laptop speakers are not okay (I've never heard of a similar case). I'll award the bounty to anyone who can point me to good tutorials or the procedure for exchanging my T410 speakers (I still have warranty; the laptop was bought in Germany, but now I am in Denmark) - or to someone who can explain the output from hda-analyzer (the big log above).
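
    Purely as a diagnostic experiment (and given the later hardware conclusion, probably not a fix): the diff shows the speaker pin, node 0x1f, dropping to power state D3. The hda-verb tool from alsa-tools can force it back to D0 by hand; the codec device path below matches the single card shown by hda-analyzer but is an assumption for any other machine.

        # force the speaker pin (node 0x1f on codec hwC0D0) back to full power (D0)
        sudo hda-verb /dev/snd/hwC0D0 0x1f SET_POWER_STATE 0x0

    If sound returns until the next dropout, the mute is the codec powering the pin down; if nothing changes, that points further toward the physical speakers or their wiring.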

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching for a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operations on the other end of the network to conserve bandwidth. Disk storage is cheap but bandwidth isn't, and the problem is that the files will often be reorganized or moved into a better directory structure. When rsync does the backup, it won't notice that a file was renamed or moved, and it retransmits it over the network all over again despite the same file existing on the other end. So I am wondering if there is a script or tool that can record where all the files are and their names, then, just prior to a backup, rescan, detect moved or renamed files, and give me a list I can use to re-apply the move/rename operations on the other side.

    The "general" characteristics of the files:

    - Large and unchanging
    - They can be renamed or moved around

    [Edit:] These are all good answers, and what I ended up doing was looking at all of the answers and writing some code to deal with this. Basically what I'm thinking of / working on now:

    - Use something like AIDE for the "initial" scan, keeping checksums on the files - they are supposed to never change, so checksums also aid in detecting corruption.
    - Create an inotify daemon that monitors these files/directories and records any changes relating to renames and moves to a log file.
    - There are some edge cases where inotify might fail to record that something happened to the filesystem, so there is a final step of using find to search for files with a change time later than the last backup.

    This has several benefits:

    - Checksums etc. from AIDE make it possible to check that the media did not get corrupted
    - inotify keeps resource usage low, with no need to rescan the filesystem over and over
    - No need to patch rsync; if I have to patch things I can, but I would prefer to avoid it to keep the burden lower (i.e. no re-patching every time there is an update).

    I've used Unison before and it's really nice, but I could've sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow rather large.
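
    A sketch of the checksum-manifest idea in shell, since the files are large and unchanging (so the hash is a stable identity across renames). Paths and filenames are illustrative; paths containing whitespace would need a more careful field-splitting scheme:

        # build a "checksum  path" manifest, sorted on the checksum
        find /data -type f -exec md5sum {} + | sort > manifest.new

        # same checksum, different path => the file was moved or renamed;
        # join output is: checksum old-path new-path
        join -j 1 manifest.old manifest.new | awk '$2 != $3 {printf "mv %s %s\n", $2, $3}'

        mv manifest.new manifest.old   # becomes the baseline for the next run

    The emitted mv commands can be replayed on the remote side before rsync runs, so rsync then sees the files already in place.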


  • 40k Event Log Errors an hour Unknown Username or bad password

    - by ErocM
    I am getting about 200k of these an hour:

        An account failed to log on.
        Subject:
            Security ID:        SYSTEM
            Account Name:       TGSERVER$
            Account Domain:     WORKGROUP
            Logon ID:           0x3e7
        Logon Type:             4
        Account For Which Logon Failed:
            Security ID:        NULL SID
            Account Name:       administrator
            Account Domain:     TGSERVER
        Failure Information:
            Failure Reason:     Unknown user name or bad password.
            Status:             0xc000006d
            Sub Status:         0xc0000064
        Process Information:
            Caller Process ID:   0x334
            Caller Process Name: C:\Windows\System32\svchost.exe
        Network Information:
            Workstation Name:   TGSERVER
            Source Network Address: -
            Source Port:        -
        Detailed Authentication Information:
            Logon Process:      Advapi
            Authentication Package: Negotiate
            Transited Services: -
            Package Name (NTLM only): -
            Key Length:         0

    (The event's standard help text: this event is generated when a logon request fails, on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon - most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested; the most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated; workstation name is not always available and may be left blank in some cases. Transited services indicate which intermediate services have participated in this logon request. Package name indicates which sub-protocol was used among the NTLM protocols. Key length indicates the length of the generated session key, and will be 0 if no session key was requested.)

    On my server, I changed my administrative username to something else, and since then I've been inundated with these messages. I found at http://technet.microsoft.com/en-us/library/cc787567(v=WS.10).aspx that type 4 means "Batch logon type is used by batch servers, where processes may be executing on behalf of a user without their direct intervention", which really doesn't shed any light on it for me. I checked the services and they all log on as Local System or Network Service - nothing as administrator. Does anyone have any idea how I can tell where these are coming from? I would assume a program is failing to authenticate... Thanks in advance!
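
    One concrete lead hiding in the event: the Caller Process ID is hex. 0x334 is 3x256 + 3x16 + 4 = 820 decimal, and listing the services hosted by that particular svchost instance usually identifies the culprit (a scheduled task or service still configured with the old "administrator" credentials would fit). PIDs change per boot, so recheck the current event after a restart:

        rem 0x334 hex = 820 decimal; shows which services live in that svchost
        tasklist /svc /FI "PID eq 820"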


  • amazon ec2 ubuntu with gitlab and nginx - cant load?

    - by thebluefox
    Ok, so I've spooled up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab: http://doc.gitlab.com/ce/install/installation.html The only step I've not been able to complete is the status check:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    It fails with:

        rake aborted!
        Errno::ENOMEM: Cannot allocate memory - whoami

    which I presume is because my EC2 is just a free-tier setup, so it isn't that well specced. Regardless, I've been trying to access GitLab through my browser. I've set up the Elastic IP and pointed my domain at it (for the purpose of this, let's say it's git.mydom.co.uk). A whois on this domain shows me it's pointing to the right place. For some reason, though, I get "Oops, Chrome could not connect to git.mydom.co.uk".

    Now, for a period of time I was getting the nginx holding page (telling me I still needed to perform configuration). That disappeared after I removed the default file from /etc/nginx/sites-enabled/ (after reading that this could be the issue on a troubleshooting page). Since then, I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file in /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permissions of the file in /sites-available, not the symlinked one in /sites-enabled. The content of this file is as follows:

        upstream gitlab {
          server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
          listen *:80 default_server;    # e.g., listen 192.168.1.1:80; in most cases *:80 is a good idea
          server_name git.mydom.co.uk;   # e.g., server_name source.example.com;
          server_tokens off;             # don't show the version number, a security best practice
          root /home/git/gitlab/public;

          # Increase this if you want to upload large attachments
          # or if you want to accept large git objects over http
          client_max_body_size 20m;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log  /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder;
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

    All the paths mentioned in here look OK... I'm about at the end of my knowledge now!
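
    Errno::ENOMEM during a rake task is a typical symptom of a swapless micro instance running out of memory. A sketch of a temporary 2 GB swap file (size is an assumption; any spare disk will do), which often lets gitlab:check and the Rails processes complete:

        sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile

        # make it survive reboots
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

    With memory pressure relieved, the "Chrome could not connect" symptom is also easier to diagnose, since the GitLab/Unicorn workers behind the nginx upstream may simply have been dying for lack of RAM.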


  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or Process Explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes to 1.6GB. I know which process seems to be causing this (sqlservr.exe), as when I kill the process the commit charge drops dramatically. However, the process in question is consuming only ~220MB of Private Bytes, yet killing it drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor, and how can I view its impact in Process Explorer?

    Note: I claim that I understand the difference between reserved and committed memory already; my investigations above relate specifically to Private Bytes, which includes only committed memory and excludes reserved memory. The Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed, memory and should not contribute to the commit charge. I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, any process potentially could.

    Further investigations: I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB, despite procexp's reported Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows Sysinternals Administrator's Reference" states:

        Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the
        Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap's definition of "Private
        Data" is more granular than that of Process Explorer's "private bytes." Procexp's "private bytes"
        includes all private committed memory belonging to the process.

    i.e. VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes". Also, after reading the 'Process committed memory' section of Mark Russinovich's excellent "Pushing the Limits of Windows: Virtual Memory", he highlights two cases which won't show up in Private Bytes:

    - File mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files).
    - Pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections).
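
    One more avenue worth checking, offered as a possibility rather than the answer: SQL Server can allocate memory through AWE / locked pages, which is accounted for outside the normal Private Bytes counter. If the instance is SQL Server 2008 or later, this can be read directly from a DMV (SQL 2005, plausible on a 2003-era box, lacks this view, in which case DBCC MEMORYSTATUS gives similar information):

        -- locked_page_allocations_kb is memory taken on the process's behalf
        -- that ordinary Private Bytes accounting does not show
        SELECT physical_memory_in_use_kb,
               locked_page_allocations_kb,
               memory_utilization_percentage
        FROM sys.dm_os_process_memory;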


  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2-based web servers, so my experience is many hours, but it is limited to CentOS, specifically Amazon's distribution. I'm installing Apache using yum, therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from the non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite, BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache.

    TEST CASE. Launch an EC2 instance, e.g. Amazon Linux AMI 2013.03.1; SSH to the server; run the commands:

        $ sudo yum install httpd
        $ sudo apachectl start
        $ sudo vi /etc/httpd/conf/httpd.conf
        $ sudo apachectl restart
        $ sudo vi /var/www/html/.htaccess

    In httpd.conf I changed the following, in the DocumentRoot section/scope:

        AllowOverride All

    In .htaccess, I added (EDIT: I added RewriteEngine On later):

        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    Permissions on .htaccess are correct, as far as I can tell:

        $ ls -al /var/www/html/.htaccess
        -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess

    Other info:

        $ httpd -v
        Server version: Apache/2.2.24 (Unix)
        Server built:   May 20 2013 21:12:45
        $ httpd -M
        Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared)
        Syntax OK

    EXPECTED BEHAVIOR:

        $ curl -I domain.com
        HTTP/1.1 301 Moved Permanently
        Date: Wed, 19 Jun 2013 12:36:22 GMT
        Server: Apache/2.2.24 (Amazon)
        Location: http://www.domain.com/
        Connection: close
        Content-Type: text/html; charset=UTF-8

    ACTUAL BEHAVIOR:

        $ curl -I domain.com
        HTTP/1.1 200 OK
        Date: Wed, 19 Jun 2013 12:34:10 GMT
        Server: Apache/2.2.24 (Amazon)
        Connection: close
        Content-Type: text/html; charset=UTF-8

    TROUBLESHOOTING STEPS. In .htaccess, I added a deliberate error:

        BLAH BLAH BLAH ERROR
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    My server threw a 500, so I know the .htaccess file is processed. As expected, it created an error log entry:

        [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command
        'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration

    Since I have root access on the server, I then tried moving my rewrite rule directly into httpd.conf. THIS WORKED (the 301 response above), which tells us several important things are working. HOWEVER, it bothers me that it didn't work in the .htaccess file, and I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
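
    One thing worth double-checking, offered as a likely suspect rather than a confirmed diagnosis: Amazon's httpd.conf contains several <Directory> blocks, and AllowOverride must be set in the one that actually covers the docroot - setting it under <Directory /> or a narrower path leaves .htaccess ignored with no error at all. The stanza that needs to read All (Apache 2.2 syntax, matching the 2.2.24 build above):

        <Directory "/var/www/html">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    Then validate and reload:

        sudo apachectl configtest && sudo apachectl restart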


  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: on a Linux (CentOS) server, if a process-monitoring script run by cron is set to close with exit 1 rather than exit 0 on finding that everything is okay and no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1 instead of exit 0 on the "everything's fine, no action needed" condition? exit 0 on finding no problems seems more appropriate to me, but maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process-monitoring scripts where 'failure' means 'this script found nothing it needed to fix' (rather than what I would expect, which is that exit 1 means 'the process being monitored has failed')?

    My specific case: I'm looking at a process-monitoring script written by my web hosting company. By process-monitoring script, I mean a script executed by cron on a regular basis that checks if an important system process is running and, if it isn't, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
        if [ $SERVICE != 0 ]; then
            exit 1;
        else
            #take action
        fi

    Seems simple enough, even for someone with limited knowledge like me, except the exit 1 part seems odd. As I understand it, exit 0 closes a program and signifies to the parent that executed it that everything is fine, while exit n, where n > 0 and n < 127, signifies that there has been some kind of error or problem. Here their script seems to go against that rule: it calls exit 1 in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me this looks like a mistake, but my experience in this area is limited. Are there cases where calling exit 1 in the "everything's fine, no action needed" condition is more appropriate than calling exit 0? Or is it a mistake?

    Wider context is pretty simple. It's a CentOS VPS, running Plesk. The script is called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
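
    For comparison, a sketch of the same script written with the conventional exit-code semantics the question describes (quoting the variable and using an arithmetic test are incidental tidy-ups, not part of the convention):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l)
        if [ "$SERVICE" -ne 0 ]; then
            exit 0    # a listener is present: success, nothing to do
        else
            # ...remedial action here (restart the service, mail the admin)...
            exit 1    # signal that the monitored process was found down
        fi

    Under this layout, cron's own mail-on-nonzero-exit behaviour and any wrapper tooling interpret the result the standard way.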


  • General guidelines / workflow to convert or transfer video "professionally"?

    - by cloneman
    I'm an IT "professional" who sometimes has to deal with small video conversion / video cutting projects, and I'd like to learn "the right way" to do this. Every time I search Google, there's always a disaster for weird, low-maturity trialware, or random forums threads from 3-4 years ago indicating various antiquated method to do it. The big question is the following: What are the "general" guidelines and tools to transcode video into some efficient (lossless?) intermediary, for editing purposes, for the purpose of eventually re-encoding it after? It seems to me like even the simplest of formats and tasks are a disaster of endless trial & error, or expertise only known by hardened experts who have a swiss army kife of weird conversion tools that they use, almost as if mounting an attack against the project. Here are a few cases in point: Simple VOB files extracted from DVD footage can't be imported into Adobe Premiere directly. Virtualdub is an old software people keep recommending but doesn't seem to support newer formats. I don't even know how to tell with certainty which codecs a video has, and weather the image is interlaced or not, and what resolution and codecs I'm dealing with. Problems: Choosing a wrong interlace option which diminishes quality Choosing a wrong pixel aspect ratio (stretches the image) Choosing a wrong "project type" in Premiere causing footage to require scaling Being forced to use some weird program that will have any number of negative effects What I'm looking for: Books or "Real knowledge" on format conversions, recognized tools, etc. that aren't some random forum guides on how to deal with video formats. Workflow guidelines on identifying a format going from one format to another without problems as mentioned above. Documentation on what programs like Adobe Premiere can and can't do with regards to formats, so that I don't use a wrench as a hammer. TL;DR How should you convert or "prepare" a video file to ensure it will be supported by Premiere for editing? Is premiere a suitable program to handle cropping, encoding, or should other tools be used for this, when making a video montage from a variety of source formats? What are some good books to read that specifically deal with converting videos that use any number of codecs?


  • I can't delete a directory inside a junctioned directory

    - by Fredy Muñoz
    So this is the deal. A couple of days ago I moved my profile folder C:\Documents and Settings\fmunoz to a different drive, D:\fmunoz. Today, I created a directory on my desktop using the point-and-click method: right-click an empty space on the desktop, select New, select Folder, leave the default name New Folder, and press Enter. I then tried to delete the folder the same way: right-click the New Folder directory, select Delete. After five seconds, I got the following message:

        ---------------------------
        Error Deleting File or Folder
        ---------------------------
        Cannot delete New Folder: Access is denied.
        Make sure the disk is not full or write-protected
        and that the file is not currently in use.
        ---------------------------

    Initially I thought there must be some sort of indexing service locking the directory, so I got a list of open files using the TuneUp Process Manager tool, but the New Folder directory wasn't there. I double-clicked My Computer, navigated to the desktop directory C:\Documents and Settings\fmunoz\Desktop, tried to delete the New Folder directory using the same point-and-click method, and got exactly the same message after the same amount of time. In the same window, I navigated to the actual location of the desktop directory, D:\fmunoz\Desktop, tried to delete the New Folder directory, and this time it worked.

    I thought this behaviour might be due to some special treatment Windows gives to the desktop or profile directories, so I tried the same thing with a different set of directories:

    1. Created a folder D:\dummy
    2. Created a junction C:\dummy pointing to D:\dummy
    3. Created a New Folder directory in C:\dummy
    4. Tried to delete New Folder from C:\dummy. Didn't work.
    5. Tried to delete New Folder from D:\dummy. It worked.

    I also tried creating the folder in the actual directory rather than through the junction: created New Folder in D:\dummy, tried to delete it from C:\dummy (didn't work), then from D:\dummy (it worked). Using the Delete key instead of the context menu's Delete option makes no difference. Shift+Delete works, as does the rd command in the console, but in both cases the deleted directory doesn't go to the Recycle Bin, which is my intention when using the Delete context menu option or the Delete key.
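
    For anyone reproducing this, a few built-in commands help confirm which paths are junctions and work around the failing delete; the specific paths below come from the test case above:

        dir /aL C:\                         &rem list reparse points (junctions) in the root of C:
        fsutil reparsepoint query C:\dummy  &rem show the junction's target data

        rem deletes immediately, bypassing the Recycle Bin (same effect as Shift+Delete)
        rd /s /q "C:\dummy\New Folder"

    A plausible reading of the symptom, not a confirmed one: deleting via the junction asks Explorer to recycle a D: item into C:'s Recycle Bin, and that cross-volume mismatch is what fails, which is consistent with Shift+Delete and rd (which skip the bin) working fine.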


  • Network speeds being report as 4x higher than actual in Windows 7 SP1

    - by Synetech
    Ever since installing Windows 7 SP1, I have noticed that every program that displays my network transfer rate shows exactly 4x the actual figure. For example, when I download something from a high-bandwidth website or through torrents with lots of sources, the indicated download rate is ~5 MBps (~40 Mbps), even though my Internet connection has a maximum of only 1.5 MBps (12 Mbps). It is the same with the upstream bandwidth: the connection maximum is 64 KBps, but I'm seeing up to 256 KBps. I have tried several different programs for monitoring bandwidth throughput and they all give the same results. I also tried different times and different days, and they always show the rate as four times too high.

    My initial thought was that my ISP had increased the speeds (without my noticing), which they have done before. However, I checked my ISP's site and they have not. Moreover, when I look at the speeds in the programs actually doing the transfers (e.g. Chrome, µTorrent, etc.), the numbers are in line with the expected values at the same time that the bandwidth-monitoring programs are showing the inflated ones.

    The only significant change (pretty much the only change at all) to my system is the installation of SP1 for Windows 7. As such, my belief is that something changed in SP1 whereby software that reads bandwidth via a specific API receives (erroneously?) quadrupled numbers, while software with access to the raw transfer data continues to receive correct values. I booted into Windows XP and downloaded some things via HTTP and torrent, and in both cases the numbers were as expected (as they were in Windows 7 before SP1). I then booted back into 7 SP1 and, once again, the numbers were four times higher than possible. Therefore something in SP1 has definitely changed how local bandwidth is calculated or reported.

    There is definitely something wonky with Windows 7 SP1's network speed calculation. I tried Googling this but (for multiple reasons) have had a difficult time finding anything relevant. Has anybody else noticed this behavior? Does anybody know of any bugs or changes in SP1 that could account for it?
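
    One way to test the "it's the counter API, not the traffic" theory directly is to sample the NIC performance counter while a download of known speed runs; Windows 7 ships PowerShell 2.0, which includes Get-Counter:

        # sample the interface counter once a second for 10 seconds during a transfer
        Get-Counter '\Network Interface(*)\Bytes Received/sec' -SampleInterval 1 -MaxSamples 10

    If these samples show 4x what the downloading application reports for the same interval, the discrepancy is pinned to the perf-counter path that monitoring tools read.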


  • Concerning persistence size in the Linux Live Creator

    - by user63085
    Hello everyone! For the last several months I have used Linux Live USB Creator, which is a very useful app for putting portable OSes onto flash drives. I mostly use this application to test and try out new OSes as they are released, before I decide to make a hard-disk installation on the computer. In many cases, the distribution supports the "persistence" feature in the flash-drive-installed OS, which is just another way of saying that across multiple boot-ups and shutdowns, all the changes made to the OS will be saved on the flash drive.

    But I have a question about the limit on the persistence size in Linux Live USB Creator (currently version 2.6). I installed Super OS 10 onto a partition on my external drive which has 30 GB. I wanted to reserve 10 GB for persistence so that I can install more applications and not run out of space as I update the installed applications or do system updates. So why is it that only 3950 MB can be set aside for persistence? It would be great if, when desired, much more persistence space could be reserved so that it will not run out soon.

    Also, as I have installed the OS on a 30 GB drive, I tried to see how much space is left. But it seems only the remainder of the persistence space is displayed when I click on the File System folder. For example, right after installation there is 3.5 GB of free space. Where can I access the remaining 26 GB or so of drive space, which is on the same drive? How do I access it?

    It would be helpful if anyone could explain and help me with this. Most importantly, it would be a big relief if the persistence could somehow be expanded by a workaround, so that I can continue using my SuperOS 10.04 (now heavily customized) OS, which unfortunately has just over 576 MB of space left after I removed OpenOffice.org and installed LibreOffice earlier today. This is what remains of the maximum allowable 3950 MB of persistence from setup. Thanks in advance!
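
    The 3950 MB ceiling is almost certainly the FAT32 maximum file size (just under 4 GiB): these tools store persistence as a single casper-rw loop file on the FAT32 USB partition. On Casper-based (Ubuntu-family) live systems, the usual workaround is a second partition labelled casper-rw, which the live OS uses instead of the file and which can be any size. A sketch, with the partition device as a deliberate placeholder - double-check it, as mkfs destroys whatever is there:

        # one-time: carve an ext3 partition out of the free space (gparted or parted),
        # then label it casper-rw so the live boot picks it up for persistence
        sudo mkfs.ext3 -L casper-rw /dev/sdX2   # /dev/sdX2 is hypothetical - verify first!

    This would also account for the "missing" 26 GB: it is simply the rest of the partition outside the loop file, and becomes directly usable once persistence moves to its own partition.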


  • Is the guideline: don't open email attachments or execute downloads or run plug-ins (Flash, Java) from untrusted sites enough to avert infection?

    - by therobyouknow
    I'd like to know if the following is enough to avert malware, as I feel that the press and other advisory resources aren't always precisely clear on all the methods by which PCs get infected. To my mind, the key step to getting infected is a conscious choice by the user to run an executable attachment from an email or download, or to view content that requires a plug-in (Flash, Java or something else). This conscious step breaks down into the following possibilities:

    - Don't open email attachments: I certainly agree with this. But let's try to be clear: email comes in two parts, the text and the attachment. Just reading the email should not be risky, right? But opening (i.e. running) email attachments IS risky (malware can be present in the attachment).
    - Don't execute downloads (e.g. from sites linked in suspect emails or otherwise): again, I certainly agree with this (malware can be present in the executable). Usually the user has to voluntarily click to download, or at least click to run the executable. Question: has there ever been a case where a user has visited a site and a download has completed and run entirely on its own?
    - Don't run content requiring plug-ins: certainly agree; malware can be present in the executable. I vaguely recall cases with Flash, but I know the Java-based vulnerabilities much better.

    Now, is the above enough? Note that I'm much more cautious than this. What concerns me is that the media are not always very clear about how malware infection occurs. They talk of "booby-trapped sites" and "browser attacks" - HOW exactly? I'd presume the other threat would be malevolent use of JavaScript to make an executable run on the user's machine. Would I be right, and are there details I can read up on about this? Generally I like JavaScript as a developer, please note. An accepted answer would fill in any holes I've missed here, so we have a complete general view of what the threats are (even though the actual specific details of new threats vary, the general vectors are known).


  • Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider

    - by Gopinath
    If you going to buy iPhone 4S on a two year contract in USA, Europe or Australia you may not find it expensive. But if you are planning to buy it in any other parts of the world, you will definitely feel the heat of ridiculous iPhone 4S price. In India iPhone 4S costs approximately costs $1000 which is 30% more than the price tag of an unlocked iPhone sold in USA. Personally I love iPhones as there is no match for the user experience provided by Apple as well as the wide range of really meaning applications available for iPhone. But it breaks heart to spend $1000 for a phone and I’m forced to look at alternates available in the market. Here are the four iPhone 4S alternates available in almost all the countries where we can buy iPhone 4S Google Galaxy Nexus The Galaxy Nexus is Google’s own Android smartphone manufactured by Samsung and sold under the brand name of Google Nexus. Galaxy Nexus is the pure Android phone available in the market without any bloat software or custom user interfaces like other Androids available in the market. Galaxy Nexus is also the first Android phone to be shipped with the latest version of Android OS, Ice Cream Sandwich. This phone is the benchmark for the rest of Android phones that are going to enter the market soon. In the words of Google this smartphone is called as “Galaxy Nexus: Simple. Beautiful. Beyond Smart.”.  BGR review summarizes the phone as This is almost comical at this point, but the Samsung Galaxy Nexus is my favourite Android device in the world. Easily replacing the HTC Rezound, the Motorola DROID RAZR, and Samsung Galaxy S II, the Galaxy Nexus champions in a brand new version of Android that pushes itself further than almost any other mobile OS in the industry. Samsung Galaxy S II The one single company that is able to sell more smartphones than Apple is Samsung. Samsung recently displaced Apple from the top smartphone seller spot and occupied it with loads of pride. Samsung’s Galaxy S II fits as one the best alternatives to Apple’s iPhone 4S with it’s beautiful design and remarkable performance. Engadget summarizes Samsung Galaxy S2 review as It’s the best Android smartphone yet, but more importantly, it might well be the best smartphone, period. Of course, a 4.3-inch screen size won’t suit everyone, no matter how stupendously thin the device that carries it may be, and we also can’t say for sure that the Galaxy S II would justify a long-term iOS user foresaking his investment into one ecosystem and making the leap to another. Nonetheless, if you’re asking us what smartphone to buy today, unconstrained by such externalities, the Galaxy S II would be the clear choice. Sometimes it’s just as simple as that. Nokia Lumia 800 Here comes unexpected Windows Phone in to the boxing ring. May be they are not as great as Androids available in the market today, but they are picking up very quickly. Especially the Nokia Lumia 800 seems to be first ever Windows Phone 7 aimed at competing serious with Androids and iPhones available in the market. There are reports that Nokia Lumia 800 is outselling all Androids in UK and few high profile tech blogs are calling it as the king of Windows Phone. Considering this phone while evaluating the alternative of iPhone 4S will not disappoint you. We assure. Droid RAZR Remember the Motorola Driod that swept entire Android market share couple of years ago? The first two version of Motorola Droids were the best in the market and they out performed almost every other Android phone those days. 
    With the invasion of Samsung Androids, Motorola lost its charm. With the recent release of the Droid RAZR, however, Motorola seems to be headed in the right direction to reclaim that prestige. The Droid RAZR is the thinnest smartphone available in the market, and its beauty is not just skin deep. Here is a review of the phone from Engadget:

    the RAZR's beauty is not only skin deep. The LTE radio, 1.2GHz dual-core processor and 1GB of RAM make sure this sleek number is ready to run with the big boys. It kept pace with, and in some cases clearly outclassed its high-end competition. Despite its deficiencies in the display department and underwhelming battery life, the RAZR looks to be a perfectly viable alternative when considering the similarly-pricey Rezound and Galaxy Nexus

    Further Reading

    So we have seen the four alternatives to the iPhone 4S available in the market; personally, I would love to buy a Samsung smartphone if I don't have the money for an iPhone 4S. If you are interested in diving deeper into the alternatives, here are a few links to help you do more research:

    Apple iPhone 4S vs. Samsung Galaxy Nexus vs. Motorola Droid RAZR: How Their Specs Compare – by Huffington Post
    Nokia Lumia 800 vs. iPhone 4S vs. Nexus Galaxy: Spec Smackdown – by PC World
    Browser Speed Test: Nokia Lumia 800 vs. iPhone 4S vs. Samsung Galaxy S II – by Gizmodo
    iPhone 4S vs Samsung Galaxy S II – by Pocket-lint
    Apple iPhone 4S vs. Samsung Galaxy S II – by Techie Buzz

    This article was originally published at Tech Dreams.

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer, always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER – IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for enumerations in relational databases. He has written an excellent piece here, and comments are welcome.

    Enumerations in Relational Database

    This is a very basic subject in relational databases, but one that is often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I will concentrate on only two cases: one which is "the right way" and one which is definitely the wrong way.

    The concept

    Let's say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there's a field that tells the person's marital status; let's name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration, with values like Single, InRelationship, Married, Divorced. And here comes the problem: SQL doesn't have enumerations.

    The wrong way

    This is, in my opinion, absolutely the wrong way to do this. It has one upside though: you'll see the enumeration's description instantly when you do a simple SELECT query, and you don't have to deal with mysterious values. There are plenty of downsides too; one would be database fragmentation. Consider this (I've left all indexes and constraints out of the query on purpose):

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] NVARCHAR(20)
        )

    You have an NVARCHAR(20) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn't fit into 20 characters? You'll have to come back and alter the table. There are other problems also, but I'll leave those for the reader to think about.

    The correct way

    Here's how I've done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I create a namespace table which tells the name of the enumeration, and I add one row to it too. I'll write all the indexes and constraints here as well.

        CREATE TABLE [CodeNamespace]
        (
            [Id] INT IDENTITY(1, 1),
            [Name] NVARCHAR(100) NOT NULL,
            CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]),
            CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name])
        )
        GO
        INSERT INTO [CodeNamespace] SELECT 'MaritalStatus'
        GO

    Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I'll add a couple of rows here too.

        CREATE TABLE [CodeValue]
        (
            [CodeNamespaceId] INT NOT NULL,
            [Value] INT NOT NULL,
            [Description] NVARCHAR(100) NOT NULL,
            [OrderBy] INT,
            CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]),
            CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id])
        )
        GO
        -- 1 is the 'MaritalStatus' namespace
        INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1
        INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2
        INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3
        INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4
        GO

    There are four columns in the CodeValue table. CodeNamespaceId tells which namespace the values belong to. Value holds the enumeration value which is used in the Person table (I'll show how this is done below). Description tells what the value means; you can use this column, for example, in a UI combo box. OrderBy tells whether the values need to be ordered in some way when displayed in the UI.

    And here's the Person table again, now with the correct columns. I'll add one row here to show how the enumeration is to be used.

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] INT
        )
        GO
        INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3
        GO

    Now, I said earlier that there is one problem with this. The MaritalStatus column doesn't have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I've solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as the value and Description as the text) so the end user can't enter any illegal values; and of course I check the entered value in the data access layer as well.

    I said in the "The wrong way" section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bind so you can even index it if you like.

        CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING
        AS
            SELECT p.[Firstname], p.[Lastname], p.[Birthday], c.[Description] MaritalStatus
            FROM [dbo].[Person] p
            JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value]
            JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus'
        GO
        -- Select from the view
        SELECT * FROM [dbo].[Person_v]
        GO

    This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com)
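    Marko notes the gap can also be closed with CHECK constraints. As a rough sketch of that idea (this code is not from the original article, and the names fn_IsValidCodeValue and CK_Person_MaritalStatus are made-up examples), one option is a scalar function wrapped in a CHECK constraint:

        -- Hypothetical sketch: validate an enumeration value against CodeValue.
        CREATE FUNCTION [dbo].[fn_IsValidCodeValue]
        (
            @NamespaceName NVARCHAR(100),
            @Value INT
        )
        RETURNS BIT
        AS
        BEGIN
            -- Returns 1 when @Value exists under the given namespace, 0 otherwise.
            IF EXISTS (
                SELECT 1
                FROM [dbo].[CodeValue] c
                JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId]
                WHERE n.[Name] = @NamespaceName AND c.[Value] = @Value
            )
                RETURN 1
            RETURN 0
        END
        GO
        ALTER TABLE [dbo].[Person]
        ADD CONSTRAINT [CK_Person_MaritalStatus]
        CHECK ([dbo].[fn_IsValidCodeValue]('MaritalStatus', [MaritalStatus]) = 1)
        GO

    Note that SQL Server evaluates a CHECK constraint only when rows in Person are inserted or updated; deleting a row from CodeValue afterwards will not be caught, so the application-layer validation is still worth keeping.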

    Read the article

  • Parallelism in .NET – Part 12, More on Task Decomposition

    - by Reed
    Many tasks can be decomposed using a Data Decomposition approach, but often this is not appropriate.  Frequently, decomposing the problem into distinctive tasks that must be performed is a more natural abstraction. However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require a bit more effort.  Before we begin parallelizing our algorithm based on the tasks being performed, we need to decompose our problem and take special care of certain considerations, such as ordering and grouping of tasks.

    Up to this point in this series, I've focused on parallelization techniques which are most appropriate when a problem space can be decomposed by data.  Using PLINQ and the Parallel class, I've shown how problem spaces where there is a collection of data, and each element needs to be processed, can potentially be parallelized. However, there are many other routines where this is not appropriate.  Often, instead of working on a collection of data, there is a single piece of data which must be processed using an algorithm or series of algorithms.  Here, there is no collection of data, but there may still be opportunities for parallelism.

    As I mentioned before, in cases like this, the approach is to look at your overall routine and decompose your problem space based on tasks.  The idea here is to look for discrete "tasks": individual pieces of work which can be conceptually thought of as a single operation.

    Let's revisit the example I used in Part 1, an application startup path.  Say we want our program, at startup, to do a bunch of individual actions, or "tasks".  The following is our list of duties we must perform right at startup:

    Display a splash screen
    Request a license from our license manager
    Check for an update to the software from our web server
    If an update is available, download it
    Set up our menu structure based on our current license
    Open and display our main welcome window
    Hide the splash screen

    The first step in Task Decomposition is breaking up the problem space into discrete tasks. This, naturally, can be abstracted as seven discrete tasks.  In the serial version of our program, these seven tasks would simply run one after another, in order.

    These tasks, obviously, provide some opportunities for parallelism.  Before we can parallelize this routine, we need to analyze these tasks and find any dependencies between them.  In this case, our dependencies include:

    The splash screen must be displayed first, and as quickly as possible.
    We can't download an update before we see whether one exists.
    Our menu structure depends on our license, so we must check for the license before setting up the menus.
    Since our welcome screen will notify the user of an update, we can't show it until we've downloaded the update.
    Since our welcome screen includes menus that are customized based off the licensing, we can't display it until we've received a license.
    We can't hide the splash until our welcome screen is displayed.

    By listing our dependencies, we start to see the natural ordering that must occur for the tasks to be processed correctly. The second step in Task Decomposition is determining the dependencies between tasks, and ordering tasks based on their dependencies. Looking at these tasks and all their dependencies, we quickly see that even a simple decomposition such as this one can get quite complicated.
    In order to simplify the problem of defining the dependencies, it's often a useful practice to group our tasks into larger, discrete tasks.  The goal when grouping tasks is to make each task "group" have as few dependencies as possible on other tasks or groups, and then work out the dependencies within that group.  Typically, this works best when any external dependency is based on the "last" task within the group when it's ordered, although that is not a firm requirement.  This process is often called Grouping Tasks.  In our case, we can easily group tasks together, effectively turning this into four discrete task groups:

    1. Show our splash screen – This needs to be left as its own task.  Multiple things depend on this task, mainly because we want this to start before any other action, and to start as quickly as possible.

    2. Check for an update, and download the update if it exists – These two tasks logically group together.  We only download an update if one exists, so that naturally follows.  This group has one dependency as an input, and other tasks only rely on the final task within this group.

    3. Request a license, and then set up the menus – Here, we can group these two tasks together.  Although we mentioned that our welcome screen depends on the license returned, it also depends on setting up the menu, which is the final task here.  Setting up our menus cannot happen until after our license is requested.  By grouping these together, we further reduce our problem space.

    4. Display the welcome window and hide the splash screen – Finally, we can display our welcome window and hide our splash screen.  This task group depends on all three previous task groups – it cannot happen until all three of the previous groups have completed.

    By grouping the tasks together, we reduce our problem space and can naturally see a pattern for how this process can be parallelized.  The diagram shows one approach: the orange boxes show each task group, with each task represented within.  We can now effectively take these tasks and run a large portion of this process in parallel, including the portions which may be the most time consuming.  We've created two parallel paths which our process execution can follow, hopefully speeding up the application startup time dramatically; a short code sketch of this wiring follows the summary list below.

    The main point to remember here is that, when decomposing your problem space by tasks, you need to:

    Define each discrete action as an individual Task
    Discover dependencies between your tasks
    Group tasks based on their dependencies
    Order the tasks and groups of tasks
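    To make the grouping concrete, here is a minimal sketch – my own illustration, not code from the article; method names such as ShowSplash and RequestLicense are hypothetical placeholders – of how these four task groups might be wired up with the .NET 4 Task Parallel Library, using continuations for the in-group ordering:

        using System;
        using System.Threading.Tasks;

        class StartupSketch
        {
            // Hypothetical stand-ins for the real startup work described above.
            static void ShowSplash()             { Console.WriteLine("Splash shown"); }
            static bool CheckForUpdate()         { return true; /* pretend an update exists */ }
            static void DownloadUpdate()         { Console.WriteLine("Update downloaded"); }
            static string RequestLicense()       { return "full"; }
            static void SetupMenus(string lic)   { Console.WriteLine("Menus built for " + lic + " license"); }
            static void ShowWelcome()            { Console.WriteLine("Welcome window shown"); }
            static void HideSplash()             { Console.WriteLine("Splash hidden"); }

            static void Main()
            {
                // Group 1: the splash screen, started first and on its own.
                Task splash = Task.Factory.StartNew(ShowSplash);

                // Group 2: check for an update, then download it only if one exists.
                Task update = Task.Factory.StartNew(CheckForUpdate)
                    .ContinueWith(t => { if (t.Result) DownloadUpdate(); });

                // Group 3: request a license, then build the menus from it.
                Task menus = Task.Factory.StartNew(RequestLicense)
                    .ContinueWith(t => SetupMenus(t.Result));

                // Group 4: runs only after all three preceding groups complete.
                Task welcome = Task.Factory.ContinueWhenAll(
                    new[] { splash, update, menus },
                    tasks => { ShowWelcome(); HideSplash(); });

                welcome.Wait();
            }
        }

    Each ContinueWith expresses a dependency edge within a group, while ContinueWhenAll gates the welcome window on all three parallel paths – the same shape as the orange boxes in the diagram.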

    Read the article
