Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • What's New in Oracle VM VirtualBox 4.2?

    - by Fat Bloke
    A year is a long time in the IT industry. Since the last VirtualBox feature release, which was a little over a year ago, we've seen new releases of cool new operating systems, such as Windows 8, ChromeOS, and Mountain Lion; a myriad of new Linux releases, from big Enterprise-class distributions like Oracle Linux 6.3 to accessible desktop distros like Ubuntu 12.04 and Fedora 17; and the spec of a typical PC or laptop double in power. All of these events have influenced the new VirtualBox version which we're releasing today. Here's how...

    Powerful hosts
    One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more VMs. Some of our users have large libraries of VMs of various vintages, whilst others have groups of VMs that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends. So we're pleased to unveil a more powerful VirtualBox Manager to address the needs of these users:

    VM Groups
    Groups allow you to organize your VM library in a sensible way, e.g. by platform type, by project, by version, by whatever. To create groups you can drag one VM onto another, or select one or more VMs and choose Machine...Group from the menu bar. You can expand and collapse groups to save screen real estate, and you can Enter and Leave a group (think iPad navigation here) by using the right and left arrow keys when groups are selected. But groups are more than passive folders, because you can now also perform operations on groups rather than on all the individual VMs. So if you have a multi-tiered solution, you can start the whole stack up with just one click.

    Autostart
    Many VirtualBox users run dedicated services in their VMs, for example, running a Wiki. With these types of VM workloads, you really want the VM to start up when the host machine boots. So with 4.2 we've introduced a cross-platform Auto-start mechanism to allow you to treat VMs as host services.

    Headless VM Launching
    With VMs such as web servers, wikis, and other types of server-class workloads, the Console of the VM is pretty much redundant. For some time now VirtualBox has offered a separate launch mechanism for these VMs, namely the command-line VBoxHeadless or VBoxManage startvm ... --type headless commands. But with 4.2 we also allow you to launch headless VMs from the Manager: simply hold down Shift when launching the VM. It's that easy. But how do you stop a headless VM? With 4.2 we allow you to Close the VM from the Manager. (BTW it's best to use the ACPI Shutdown method, which allows the guest VM to close down gracefully.)

    Easy VM Creation
    For our expert users, the New VM Wizard was a little tiresome, so now there's a faster 2-click VM creation mode: just Hide the description when creating a new VM.

    Powerful VMs
    As the hosts have become more powerful, so are the guests that are running inside them. Here are some of the 4.2 features to accommodate them:

    Virtual Network Interface Cards
    With 4.2, it's now possible to create VMs with up to 36 NICs when using the ICH9 chipset emulation. But with great power comes great responsibility (didn't Obi-Wan say something similar?), and so we have also introduced bandwidth limiting to prevent a rogue VM stealing the whole pipe.

    VLAN tagging
    Some of our users leverage VLANs extensively, so we've enhanced the E1000 NICs to support this.

    Processor Performance
    If you are running a CPU which supports Nested Paging (aka EPT in the Intel world), such as most of the Core i5 and i7 CPUs, or are running an AMD Bulldozer or later, you should see some performance improvements from our work with these processors. And while we're talking processors, we've added support for some of the more modern VIA CPUs too.

    Powerful Automation
    Because VirtualBox runs atop a fully blown operating system, it makes sense to leverage the capabilities of the host to run scripts that can drive the guest VMs. Guest Automation was introduced in a prior release, but with 4.2 we've revamped the APIs to allow a richer and more powerful set of operations to be executed by the guest. Check out the IGuest APIs in the VirtualBox Programming Guide and Reference (SDK).

    Powerful Platforms
    All the hardcore engineering that has gone into 4.2 has been done for a purpose, and that is to deliver a fast and powerful engine that can run almost any x86 OS because of the integrity of the virtualization. So we're pleased to add support for these platforms: Mac OS X "Mountain Lion", Windows 8, Windows Server 2012, Ubuntu 12.04 ("Precise Pangolin"), Fedora 17, and Oracle Linux 6.3.

    We don't have time to go into the myriad of smaller improvements, such as support for burning audio CDs from a guest, bi-directional clipboard control, drag-and-drop of files into Linux guests, etc., so we'll leave that as an exercise for the user as soon as you've downloaded 4.2 from the Oracle or community site and taken a peek at the User Guide. So all in all, a pretty solid release, one that we hope you'll enjoy discovering. - FB
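
    As a quick sketch of that headless workflow from the command line (the VM name "wiki-vm" here is just a placeholder):

        # Start the VM with no console window - the same thing Shift-launching does
        VBoxManage startvm "wiki-vm" --type headless

        # Send the ACPI power button event so the guest shuts down gracefully
        VBoxManage controlvm "wiki-vm" acpipowerbutton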

    Read the article

  • Removing file with strange characters in filename in OS X

    - by SiggyF
    After a memory error in my program, I am stuck with a file with a strange filename. It's proving quite resistant to all normal methods of removing files with strange names. The filename is:

        %8BUȅ҉%95d%F8%FF%FF\x0f%8E%8F%FD%FF%FF%8B%B5T%F8%FF%FF%8B%85\%F8%FF%FF\x03%85x%F8%FF%FF%8B%95D%F8%FF%FF%8B%BD%9C%F8%FF%FF%8D\x04%86%8B%B5@%F8%FF%FF%89%85%90%F8%FF%FF%8B%85X%F8%FF%FF\x03%85%9C%F8%FF%FF%C1%E7\x02%8B%8Dx

    I tried the following:
    - rm * - "No such file or directory"
    - rm -- filename - "No such file or directory"
    - rm "filename" - "No such file or directory"
    - ls -i to get the inode number - "No such file or directory"
    - stat filename - "No such file or directory"
    - zip the directory the file is in - error occurred while adding "" to the archive
    - delete the directory in Finder - error -43
    - in Python: os.unlink(os.listdir(u'.')[0]) - OSError: No such file or directory
    - find . -type f -exec rm {} \; - "No such file or directory"
    - checked for locks on the file with lsof - no locks

    All these attempts result in a file-not-found error (quoting the long filename) or error -43 - even the ls -i. I couldn't find any more options, so before reformatting or repairing my filesystem (fsck might help) I thought maybe there is something I missed. I wrote this small C program to get the inode:

        #include <stdio.h>
        #include <stddef.h>
        #include <sys/types.h>
        #include <dirent.h>   /* needed for DIR, opendir(), readdir() */

        int main(void)
        {
            DIR *dp;
            struct dirent *ep;

            dp = opendir("./");
            if (dp != NULL) {
                while ((ep = readdir(dp))) {
                    printf("d_ino=%lu, ", (unsigned long) ep->d_ino);
                    printf("d_name=%s.\n", ep->d_name);
                }
                (void) closedir(dp);
            } else
                perror("Couldn't open the directory");
            return 0;
        }

    That works. I now have the inode, but the normal find -inum inode -exec rm '{}' \; doesn't work. I think I have to use clri now.
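
    For reference, the traditional clri sequence looks roughly like the sketch below - assuming a filesystem that actually ships clri (UFS-style filesystems do; on OS X's HFS+ the closest equivalent is letting fsck repair the damage). The device name and inode number are hypothetical and must be substituted:

        # All names here are placeholders - substitute the real device and inode.
        umount /dev/disk0s2          # clri operates on the raw, unmounted device
        clri /dev/disk0s2 1234567    # zero out the inode found by the program above
        fsck -fy /dev/disk0s2        # let fsck reclaim the orphaned blocks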

    Read the article

  • Caching issue with CentOS forwarding DNS server

    - by Paddington
    I installed a forwarding DNS server on CentOS 5.10 and it is resolving addresses, e.g. google.com. When I stopped named (service named stop) and tried to dig (dig @localhost A google.com), there was a failure to resolve the address. I checked and see the caching daemon nscd is running. Does this mean the server is not caching at all? How can I get it to cache? named.conf:

        options {
            // Those options should be used carefully because they disable port
            // randomization
            // query-source port 53;
            // query-source-v6 port 53;

            // Put files that named is allowed to write in the data/ directory:
            listen-on port 53 {127.0.0.1; 10.0.0.4;};
            directory "/var/named"; // the default
            dump-file "/var/named/chroot/var/named/data/cache_dump.db";
            statistics-file "/var/named/chroot/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/chroot/var/named/data/named_mem_stats.txt";
            // allow-query {localhost; 192.168.0.0/24; 10.0.0.0/8;};
            recursion yes;
            //allow-query { localhost; 10.0.0.0/8;};
            allow-query { localhost; any; };
            allow-query-cache { localhost; any; };
            forward only;
            forwarders {8.8.8.8; 8.8.4.4;};
            dnssec-enable yes;
            // dnssec-lookaside auto;
            /* Path to ISC DLV key */
            // bindkeys-file "/etc/named.iscdlv.key";
            // managed-keys-directory "/var/named/dynamic";
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
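
    One quick way to verify whether named is answering from its cache (while it is running) is to compare query times on a repeated lookup - a sketch:

        # The first query is forwarded to 8.8.8.8/8.8.4.4...
        dig @localhost google.com A | grep "Query time"
        # ...the repeat should come back from cache in roughly 0 ms
        dig @localhost google.com A | grep "Query time"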

    Read the article

  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (e.g. Seagate BlackArmor) or simply fail without it -- since these programs are sending raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist, and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems. What I'd like to be able to do:

    1) Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "Deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, which would lack the deny token and provide admin access.

    2) Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/) but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.

    Non-solutions:
    - RunAs: requires the administrator password! (but everyone calls it "sudo" anyway)
    - RunAs /savecred: nice idea, but appears to be completely insecure.
    - RUNASSPC: same concept as RunAs, uses "encrypted" files with credentials, but checks in user-space.
    - Scheduled Tasks: "fixed" permissions make this difficult, and it doesn't support interactive processes even if it did.
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"

    Read the article

  • CakePHP & Nginx config/rewrite rules

    - by seanl
    Hi, somebody please help me out. I've asked this at Stack Overflow as well but didn't get much of a response, and was debating whether it was programming or server related. I'm trying to set up a CakePHP environment on a CentOS server running Nginx with FastCGI. I already have a WordPress site and a phpMyAdmin site running on the server, so I have PHP configured correctly. My problem is that I cannot get the rewrite rules set up correctly in my vhost so that Cake renders pages correctly, i.e. with styling and so on. I've googled as much as possible, and the main consensus from sites like the one listed below is that I need to have the following rewrite rule in place:

        location / {
            root /var/www/sites/somedomain.com/current;
            index index.php index.html;
            # If the file exists as a static file serve it directly
            # without running all the other rewrite tests on it
            if (-f $request_filename) {
                break;
            }
            if (!-f $request_filename) {
                rewrite ^/(.+)$ /index.php?url=$1 last;
                break;
            }
        }

    http://blog.getintheloop.eu/2008/4/17/nginx-engine-x-rewrite-rules-for-cakephp

    The problem is these rewrites assume you run Cake directly out of the webroot, which is not what I want to do. I have a standard setup for each site, i.e. one folder per site containing the following folders: log, backup, private and public. Public is where Nginx looks for the files it serves, but I have Cake installed in private with a symlink in public linking back to /private/cake/. This is my vhost:

        server {
            listen 80;
            server_name app.domain.com;
            access_log /home/public_html/app.domain.com/log/access.log;
            error_log /home/public_html/app.domain.com/log/error.log;

            #configure Cake app to run in a sub-directory
            #Cake install is not in root, but elsewhere and configured
            #in APP/webroot/index.php
            location /home/public_html/app.domain.com/private/cake {
                index index.php;
                if (!-e $request_filename) {
                    rewrite ^/(.+)$ /home/public_html/app.domain.com/private/cake/$1 last;
                    break;
                }
            }

            location /home/public_html/app.domain.com/private/cake/ {
                index index.php;
                if (!-e $request_filename) {
                    rewrite ^/(.+)$ /home/public_html/app.domain.com/public/index.php?url=$1 last;
                    break;
                }
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/public_html/app.domain.com/private/cake$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    Now like I said, I can see the main index.php of Cake and have connected it to my DB, but this page is without styling, so before I proceed any further I would like to configure it correctly. What am I doing wrong? Thanks, seanl

    Read the article

  • I can't delete a directory inside a junctioned directory

    - by Fredy Muñoz
    So this is the deal. A couple of days ago I moved my profile folder C:\Documents and Settings\fmunoz to a different drive, D:\fmunoz. Today, I created a directory on my desktop using the point-and-click method:

    1. Right-click on an empty space on the desktop
    2. Select New
    3. Select Folder
    4. Leave the default name New Folder and press Enter

    I tried to delete the folder using the point-and-click method:

    1. Right-click the New Folder directory
    2. Select Delete

    After five seconds, I got the following message:

        ---------------------------
        Error Deleting File or Folder
        ---------------------------
        Cannot delete New Folder: Access is denied.
        Make sure the disk is not full or write-protected
        and that the file is not currently in use.
        ---------------------------

    Initially I thought that there must be some sort of indexing service locking the directory, so I got a list of open files using the TuneUp Process Manager tool, but the New Folder directory wasn't there. I double-clicked My Computer, navigated to the desktop directory C:\Documents and Settings\fmunoz\Desktop, tried to delete the New Folder directory using the same point-and-click method described above, and got exactly the same message after the same amount of time. In the same window, I navigated to the actual location of the desktop directory, D:\fmunoz\Desktop, tried to delete the New Folder directory, and this time it worked. I thought that this behavior was due to some special treatment that Windows gives to the desktop or the profile directories, so I tried doing the same thing with a different set of directories:

    1. Created a folder D:\dummy
    2. Created a junction C:\dummy pointing to D:\dummy
    3. Created a New Folder directory in C:\dummy
    4. Tried to delete New Folder from C:\dummy. Didn't work.
    5. Tried to delete New Folder from D:\dummy. It worked.

    I also tried creating the folder in the actual directory rather than the junction directory:

    1. Created a New Folder directory in D:\dummy
    2. Tried to delete New Folder from C:\dummy. Didn't work.
    3. Tried to delete New Folder from D:\dummy. It worked.

    I also tried using the Delete key instead of the Delete option on the context menu, but it didn't work. When using the Shift+Delete sequence, it works. It also works by using the rd command in the console, but in both cases the deleted directory doesn't go to the Recycle Bin, which is my intention when using the Delete context menu option or the Delete key.

    Read the article

  • Reviews Cheyney Group Marketing: What accounting software is available in the market for small businesses?

    - by user225556
    Accounting is the language of business, and good accounting software can save you hundreds of hours at the business equivalent of Berlitz. There's no substitute for an accounting pro who knows the ins and outs of tax law, but today's desktop packages can help you with everything from routine bookkeeping to payroll, taxes, and planning. Each package also produces files that you can hand off to an accountant as needed. Small-business managers have more accounting software options than ever, including subscription Web-based options that don't require their users to install or update software. Many businesses, however--including those that need to track large inventories or client databases, and those that prefer not to entrust their data to the cloud--may be happier with a desktop tool. We looked at three general-purpose, small-business accounting packages: Acclivity AccountEdgePro 2012 (both the product and the company were previously called MYOB), Intuit QuickBooks Premier 2012, and Sage's Sage 50 Complete 2013 (the successor to Peachtree Complete). All three packages offer a solid array of tools for tracking income and expenses, invoicing, managing payroll, and creating reports. These full-featured and highly mature programs don't come cheap. Acclivity AccountEdge Pro, at $299, is the least expensive; and prices climb if you opt to use common time-saving add-ons such as payroll services, or if you add licenses for multiple user accounts. All three are solid on the basics, but they have distinct differences in style and focus. The more you know about your accounting requirements, the more closely you'll want to look at the software you're thinking of buying. Sage 50 Complete should appeal most to people who understand the fine points of accounting and can use the product's many customization features (especially for businesses that manage inventory). QuickBooks works hard to appeal to newbies who need only the basics and might be intimidated by the level of detail and technical language exposed in the other two packages. At the same time, it also has a slew of third-party add-ons that meet specific needs and greatly expand its capabilities. AccountEdge Pro balances accessibility with a strong feature set at an affordable price. It's especially suitable for businesses that need to provide simultaneous access to multiple users.

    Read the article

  • Linux software RAID6: 3 drives offline - how to force online?

    - by Ole Tange
    This is similar to "3 drives fell out of Raid6 mdadm - rebuilding?" except that it is not due to a failing cable. Instead, the third drive fell offline during the rebuild of another drive. The drive failed with:

        kernel: end_request: I/O error, dev sdc, sector 293732432
        kernel: md/raid:md0: read error not correctable (sector 293734224 on sdc).

    After rebooting, both these sectors and the sectors around them are fine. This leads me to believe the error is intermittent and thus the device simply took too long to error-correct the sector and remap it. I expect that no data was written to the RAID after it failed. Therefore I hope that if I can kick the last failing device online, the RAID is fine and the XFS filesystem is OK, maybe with a few missing recent files. Taking a backup of the disks in the RAID takes 24 hours, so I would prefer that the solution works the first time. I have therefore set up a test scenario:

        export PRE=3
        parallel dd if=/dev/zero of=/tmp/raid${PRE}{} bs=1k count=1000k ::: 1 2 3 4 5
        parallel mknod /dev/loop${PRE}{} b 7 ${PRE}{} \; losetup /dev/loop${PRE}{} /tmp/raid${PRE}{} ::: 1 2 3 4 5
        mdadm --create /dev/md$PRE -c 4096 --level=6 --raid-devices=5 /dev/loop${PRE}[12345]
        cat /proc/mdstat
        mkfs.xfs -f /dev/md$PRE
        mkdir -p /mnt/disk2
        umount -l /mnt/disk2
        mount /dev/md$PRE /mnt/disk2
        seq 1000 | parallel -j1 mkdir -p /mnt/disk2/{}\;cp /bin/* /mnt/disk2/{}\;sleep 0.5 &
        mdadm --fail /dev/md$PRE /dev/loop${PRE}3 /dev/loop${PRE}4
        cat /proc/mdstat
        # Assume reboot so no process is using the dir
        kill %1; sync &
        kill %1; sync &
        # Force fail one too many
        mdadm --fail /dev/md$PRE /dev/loop${PRE}1
        parallel --tag -k mdadm -E ::: /dev/loop${PRE}? | grep Upda
        # loop 2,5 are newest. loop1 almost newest => force add loop1

    The next step is to add loop1 back - and this is where I am stuck. After that, do an XFS consistency check. When that works, check that the solution also works on real devices (such as 4 USB sticks).
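
    One commonly suggested sketch for that stuck step - exactly the kind of thing the test scenario above is meant to validate before touching real disks - is to stop the array and force-assemble it from the two newest members plus the almost-newest one:

        mdadm --stop /dev/md$PRE
        # --force accepts loop1's slightly stale event count; a 5-device RAID6
        # can run degraded on the remaining 3 members
        mdadm --assemble --force /dev/md$PRE /dev/loop${PRE}1 /dev/loop${PRE}2 /dev/loop${PRE}5
        cat /proc/mdstat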

    Read the article

  • R12.0 Cash Management Consolidated Patch Collection (CPC) And R12.1 Cash Management Recommended Patch Collection (RPC)

    - by user793553
    If you have Oracle E-Business Suite's Cash Management (CE) application installed, you'll want to be sure to install the latest CPC (Consolidated Patch Collection) if you are using an R12.0 version of the apps, or the latest RPC (Recommended Patch Collection) for the R12.1 version of the apps. These collections give you all the fixes currently available for known issues in the specified versions of the application, including all of the latest Root Cause Analysis Fixes (RCAs)!

    What is an "RPC" (for R12.1 users)? Since the release of 12.1, a number of recommended patches for Oracle Cash Management have been made available as standalone patches to help address important business process issues. Adoption of these patches was highly recommended at the time, but not always implemented, so to further facilitate adoption of these patches, Oracle consolidated them into product-specific Recommended Patch Collections (RPCs) - a collection of recommended patches. They were created by Oracle Development with the following goals in mind:

    - Stability: To address data integrity issues that have been identified by Oracle Development and Oracle Software Support as having the potential to interfere with the normal completion of important business processes (such as period close, etc.).
    - Root Cause Fixes (RCAs): To make available root cause fixes for known data integrity issues.
    - Compact: To keep the file footprint as small as possible to help facilitate the install process and minimize testing.
    - Granular: To compile the collection of patches based on functional areas, allowing a customer to apply multiple RPCs at once, or in phases (based on individual needs and goals).

    Where to start: ALL R12 Cash Management users (R12.0 and R12.1) should start with the following Note on My Oracle Support (MOS): Doc ID 1367845.1: R12: Cash Management Recommended Patch Collections. It's a great place for important implementation information about both sets of critical patch collections!

    For R12.1.x users: R12.1 users should also take a look at the documents below for even more information about the RPC for the R12.1.x versions of the Cash Management application, and other related available RPCs:

    - Doc ID 1489997.1: Master Troubleshooting Guide for CE: Reconciliation & Clearing [VIDEO]
    - Doc ID 954704.1: EBS: R12.1 Oracle Financials Recommended Patch Collections (RPCs)
    - Doc ID 1316506.1: R12: Oracle CE: Upgrading from R11i to R12.1: Latest Recommended Patches

    Patch Wizard Utility: While a patch may contain several hundred files, the impact on your system may actually be minimal. Patches contain hard prerequisites that are intended to make a patch work on a very low code baseline. The Patch Wizard Utility will give you a detailed analysis of the patch's impact on your instance BEFORE it's applied, so you'll know exactly what to expect from the application. Please refer to Doc ID 976188.1 for more information on this important utility.

    Read the article

  • ssh, "Last Login", `last` and OS X

    - by allentown
    I have hit the googles as much as I can on this; being specific to OS X, I am not finding an answer. Nothing is wrong, but curiosity levels are high.

        $ ssh [email protected]
        Password:
        Last login: Wed Apr 7 21:28:03 2010 from my-laptop.local
        ^lonely tylenol^

    Line 1 is my command, line 2 is the shell asking for the password, line 3 is where my question comes from, and line 4 comes out of /etc/motd. I can find nothing in ~/ or any of the .bash* files that contains the string "Last login", and I would like to alter it. It performs some type of hostname lookup, which I cannot determine. If I ssh to another host:

        $ ssh [email protected]
        Last login: Wed Apr 7 21:14:51 2010 from 123-234-321-123-some.cal.isp.net.example
        hi there, you are on box 456

    Line 1 is my command, line 2 is again where my question comes from, and line 3 is from /etc/motd. (The dash'd IP address is not reversed.) On this remote host, I have ~/.ssh and its corresponding keys set up, so there was no password request. Where is the "Last login:" coming from, where does the date stamp come from, and most importantly, where does the hostname come from? While on [email protected] (box 456):

        $ hostname
        remote.location.example456.com

    Or with dig, to make sure I have rDNS/PTR set up, for which I am not authoritative, but which my ISP has correctly set:

        $ dig -x 123.234.321.123 PTR
        remote.location.example456.com

    or

        $ dig PTR 123.321.234.123.in-addr.arpa. +short
        remote.location.example456.com.

    My previous hostname used to be 123-234-321-123-some.cal.isp.net.example, which I set with hostname -s remote.location.example456.com, because it was obnoxious to see such a long name. That solves the value returned by hostname, which now returns remote.location.example456.com. Mac OS X (10.6 in this case) does seem to honor:

        touch ~/.hushlogin

    If I leave that file empty, I get nothing on the shell when I log in. I want to know what controls the host resolution of the IP, and how it is all working. For example, running last reports a huge list of my logins, which have obtusely long hostnames, when it would be preferable for them to just be remote.location.example456.com. More confusing to me, reading the man pages for wtmp and lastlog, it looks like lastlog is not used on OS X; /var/log/lastlog does not exist. Actually, none of these exist on 10.5 or 10.6:

        /var/run/utmp     The utmp file.
        /var/log/wtmp     The wtmp file.
        /var/log/lastlog  The lastlog file.

    If I am to assume that the system is doing some kind of reverse lookup, I certainly do not know what it is, as it is not an accurate one.
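
    As an aside, the banner is printed by sshd itself, not by the shell (OS X replaced the old utmp/wtmp/lastlog files with utmpx and the system log, which is why those files are absent). So one way to suppress it server-wide - a sketch assuming root access on the remote host - is sshd's PrintLastLog option:

        # On the server (the file lives at /etc/sshd_config on OS X,
        # /etc/ssh/sshd_config on most Linux systems)
        echo "PrintLastLog no" >> /etc/sshd_config
        # restart sshd afterwards for the change to take effect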

    Read the article

  • If I were in a Silverlight focus group, here are ten things I would say.

    - by mbcrump
    Silverlight is a great product right off the shelf. I use it, love it, and spend a lot of time helping the community understand it. This, however, doesn't mean that I don't think it can get better. If I were invited to a Microsoft focus group about Silverlight, here are 10 things I would say:

    1. We need more navigation templates. I've found four templates that Microsoft has released (Cosmo, Windows 7, Accent and JetPack). This number needs to be around 16. In order to get more people developing for Silverlight, we need to give them a variety of templates to get them off the ground quickly.

    2. Silverlight needs to ship with the next version of Windows. At least version 4 needs to be pre-installed on Windows going forward. It's small, it runs in its own sandbox, and I cannot find a reason for it not to be included.

    3. Silverlight needs to run on more platforms. iOS and Android are the key here. I think Microsoft should shoot for Android first, since I believe Android will take the lead in the mobile market (at least for the short term). It would also be great to see Microsoft use Silverlight as the focus of their new tablets / "AppleTV". I would even invest in getting it working with Kinect.

    4. When creating a new project in Silverlight, we should have the option to create a unit test. Most Silverlight developers are not unit testing. If this is surprising to you, then you need to get out and talk to more developers. I partially blame this on Microsoft. When you create a new ASP.NET MVC application, you simply put a check to create a Unit Test project. We need the same thing for Silverlight. We should steer the developer in the right direction.

    5. Design patterns such as MVVM need to be easier to implement in Silverlight solutions. I'd go so far as to say that MVVM Light should ship with Visual Studio. With the project/item templates and code snippets, Laurent puts you in the right direction. This is the way it should have been: easy for the 9-5 developer to grasp. I believe the majority of developers use code-behind because that's what is in all the demos provided by Microsoft. They are not trying to write sucky code; it is that they simply don't know a better way.

    6. XAP files should be obfuscated, and unused references deleted, by default when in "Release" mode. A better Silverlight experience starts with a smaller XAP file. The less a user has to download, the better, even with the majority of people on broadband. I would also recommend built-in obfuscation by Microsoft. People are paranoid that anyone can rename the .xap to .zip and run it through Reflector.

    7. Get rid of the boring install experiences. Here is a great write-up on what I'm talking about. The default "Install Silverlight" and "Loading" screens suck. They suck bad. We need a choice of templates that a professional designer has created.

    8. Silverlight needs to support more image formats. For example, it would be great to use .gifs without converting them to .png.

    9. Switching between Blend 4 and VS2010 to develop a Silverlight application is a pain. This is probably one of the biggest issues, and I can't think of a good solution for it. It would be nice if VS2012 had the best of both worlds and you never had to leave VS.

    10. We need reporting controls with SSRS support included in the Silverlight Toolkit. I can't think of another control that we need built into the toolkit. It would also be helpful to have export to .xls, .pdf and .doc included with the control.

    I hope that this post will at least get a few people talking. Who knows, Microsoft could be working on these things right now. Thanks for reading!

    Read the article

  • domain/IN: has no NS records

    - by thejartender
    I have set up a home web server using Ubuntu 12.10, and I can safely say that it works with regards to router forwarding and ports being found. I know this because I switched my hosting provider's VPS SOA record to use my ISP IP with an 'A' value and had my website running from home. This verified that my server was configured correctly, so I started what I believe to be the final step in making my old desktop into a full DNS server. I found a tutorial that got me started. My LAN network consists of the following:

    - My router, with a gateway of 10.0.0.zzz
    - My server, with an IP of 10.0.0.xxx
    - A laptop, with an IP of 10.0.0.yyy

    Step 1: I installed BIND via sudo apt-get install bind9

    Step 2: I configured /etc/bind/named.conf.local with:

        zone "sognwebdesign.no" {
            type master;
            file "/etc/bind/zones/sognwebdesign.no.db";
        };

        zone "0.0.10.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.0.0.10.in-addr.arpa";
        };

    Step 3: Updated /etc/bind/named.conf.options with two ISP DNS addresses

    Step 4: Updated /etc/resolv.conf with:

        nameserver 10.0.0.xxx
        search lan
        search sognwebdesign.no

    Step 5: Created a /etc/bind/zones directory

    Step 6: Created /etc/bind/zones/sognwebdesign.no.db with:

        $TTL 3D
        @       IN      SOA     ns.sognwebdesign.no. admin.sognwebdesign.no. (
                                2007062001
                                28800
                                3600
                                604800
                                38400 );
        sognwebdesign.no.       IN      NS      ns1.sognwebdesign.no.
        sognwebdesign.no.       IN      NS      ns2.sognwebdesign.no.
        sognwebdesign.no.       IN      NS      ns3.sognwebdesign.no.
        NS1                     IN      A       10.0.0.1
        NS2                     IN      A       10.0.0.2
        NS3                     IN      A       10.0.0.3
        www                     IN      A       10.0.0.4
        yuccalaptop             IN      A       10.0.0.19
        gw                      IN      A       10.0.0.138
                                TXT     "Network Gateway"

    Step 7: Created /etc/bind/zones/rev.0.0.10.in-addr.arpa with:

        $TTL 3D
        @       IN      SOA     ns.sognwebdesign.no. admin.sognwebdesign.no. (
                                2007062001
                                28800
                                604800
                                604800
                                86400 );
        zzz     IN      PTR     gw.sognwebdesign.no.
        1       IN      PTR     ns1.sognwebdesign.no.
        2       IN      PTR     ns2.sognwebdesign.no.
        3       IN      PTR     ns3.sognwebdesign.no.
        yyy     IN      PTR     yuccalaptop.sognwebdesign.no.

    I then restart BIND and dig -x sognwebdesign.no, and it works. Lastly, I perform named-checkzone on each of my zone files, but my reverse zone file fails with:

        sognwedesign.no/IN: has no NS records

    Can anyone explain what I am doing wrong here or assist me in getting this configured correctly?
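
    A quick sketch for checking what the running server actually returns for the delegations (substitute the server's real 10.0.0.xxx address):

        # Does the forward zone answer with its NS records?
        dig @10.0.0.xxx sognwebdesign.no NS +short
        # And does the reverse zone resolve a host?
        dig @10.0.0.xxx -x 10.0.0.19 +short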

    Read the article

  • Who could ask for more with LESS CSS? (Part 1 of 3&ndash;Features)

    - by ToStringTheory
    It wasn't very long ago that I first began to get into CSS precompilers such as SASS (Syntactically Awesome Stylesheets) and LESS (The Dynamic Stylesheet Language), and I have been hooked on the idea since. When I finally had a new project come up, I leapt at the opportunity to try out one of these languages.

    Introduction
    To be honest, I was hesitant at first to add either framework, as I didn't really know much more than what I had read on their homepages, and I didn't like the idea of adding too much complexity to a project - I couldn't guarantee I would be the only person to support it in the future. Thankfully, both of these languages just add things to CSS. You don't HAVE to know LESS or SASS to do anything; you can still write your old-school CSS, and your output will be the same. However, when you want to start doing more advanced things such as variables, mixins, and color functions, the functionality is all there for you to utilize. From what I had read, SASS has a few more features than LESS, which is why I initially tried to figure out how to incorporate it into an MVC 4 project. However, through my research, I couldn't find a way to accomplish this without including some bit of the Ruby on Rails framework on the computer running it, and I hated the fact that I had to do that. Besides SASS, there is little chance of me getting into the RoR framework, at least in the next couple of years. So in the end, I settled on using LESS.

    Features
    So, what can LESS (or SASS) do for you? There are several reasons I have come to love it in the past few weeks.

    1 – Constants
    Using LESS, you can finally declare a constant and use its value across an entire CSS file. The case most people would be familiar with is colors: declaring one or two color variables that comprise the theme of the site, and not having to retype their specific hex codes each time, but rather a variable name. What's great about this is that if you end up having to change a color, you only have to change it in one place. An important thing to note is that you aren't limited to creating constants just for colors, but for strings and measurements as well.

    2 – Inheritance
    This is a cool feature in my mind for simplicity and organization. Both LESS and SASS allow you to place selectors within other selectors, and when it is compiled, the languages will break the rules out as necessary and keep the inheritance chain you created in the selectors.

    Example LESS code:

        #header {
          h1 {
            font-size: 26px;
            font-weight: bold;
          }
          p {
            font-size: 12px;
            a {
              text-decoration: none;
              &:hover {
                border-width: 1px
              }
            }
          }
        }

    Example compiled CSS:

        #header h1 {
          font-size: 26px;
          font-weight: bold;
        }
        #header p {
          font-size: 12px;
        }
        #header p a {
          text-decoration: none;
        }
        #header p a:hover {
          border-width: 1px;
        }

    3 – Mixins
    Mixins are where languages like this really shine: the ability to mix other definitions into a rule, or to set up a parametric mixin. There is really a lot of content in this area, so I would suggest looking at http://lesscss.org for more information. One of the things I would suggest if you do begin to use LESS is to also grab the mixins.less file from the Twitter Bootstrap project. This file already has a bunch of predefined mixins for things like border-radius with all of the browser-specific prefixes. This alone is of great use!

    4 – Color Functions
    This is the last thing I wanted to point out, as my final post in this series will be utilizing these functions in a more drawn-out manner. Both LESS and SASS provide functions for getting information from a color (R, G, B, H, S, L). Using these, it is easy to define a primary color and then darken or lighten it a little for your needs. Example:

    Example LESS code:

        @base-color: #111;
        @red:        #842210;

        #footer {
          color: (@base-color + #003300);
          border-left:  2px;
          border-right: 2px;
          border-color: desaturate(@red, 10%);
        }

    Example compiled CSS:

        #footer {
          color: #114411;
          border-left:  2px;
          border-right: 2px;
          border-color: #7d2717;
        }

    I have found that these can be very useful and powerful when constructing a site theme.

    Conclusion
    I came across LESS and SASS when looking for the best way to implement some type of CSS variables for colors, because I hated having to do a Find and Replace in all of the files using the colors, and in some instances you couldn't just find/replace because the color choices interfered with other colors (a color to replace of #000, yet colors existed like #0002bc). So in many cases I would end up having to do a Find and manually check each one. In my next post, I am going to cover how I've come to set up these items and the structure of the items in the project, as well as the conventions I have come to start using. In the final post in the series, I will cover a neat little side project I built in LESS dealing with colors!
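
    As an aside, for trying the snippets above outside of an IDE, a minimal sketch using the reference Node.js compiler (assuming Node and npm are already installed; the file names are placeholders):

        npm install -g less       # installs the lessc command-line compiler
        lessc site.less site.css  # compiles the LESS source to plain CSS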

    Read the article

  • Why isn't passwordless ssh working?

    - by Nelson
    I have two Ubuntu Server machines sitting at home. One is 192.168.1.15 (we'll call this 15), and the other is 192.168.1.25 (we'll call this 25). For some reason, when I want to set up passwordless login from 15 to 25, it works like a champ. When I repeat the steps on 25, so that 25 can log in without a password on 15, no dice. I have checked both sshd_config files. Both have:

        RSAAuthentication yes
        PubkeyAuthentication yes

    I have checked permissions on both servers. On 25:

        drwx------ 2 bion2 bion2 4096 Dec 4 12:51 .ssh
        -rw------- 1 bion2 bion2 398 Dec 4 13:10 authorized_keys

    On 15:

        drwx------ 2 shimdidly shimdidly 4096 Dec 4 19:15 .ssh
        -rw------- 1 shimdidly shimdidly 1018 Dec 4 18:54 authorized_keys

    I just don't understand when things would work one way and not the other. I know it's probably something obvious just staring me in the face, but for the life of me, I can't figure out what is going on. Here's what ssh -v says when I try to ssh from 25 to 15:

        ssh -v -p 51337 192.168.1.15
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to 192.168.1.15 [192.168.1.15] port 51337.
        debug1: Connection established.
        debug1: identity file /home/shimdidly/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: identity file /home/shimdidly/.ssh/id_rsa-cert type -1
        debug1: identity file /home/shimdidly/.ssh/id_dsa type 2
        debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
        debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
        debug1: identity file /home/shimdidly/.ssh/id_dsa-cert type -1
        debug1: identity file /home/shimdidly/.ssh/id_ecdsa type -1
        debug1: identity file /home/shimdidly/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA 54:5c:60:80:74:ab:ab:31:36:a1:d3:9b:db:31:2a:ee
        debug1: Host '[192.168.1.15]:51337' is known and matches the ECDSA host key.
        debug1: Found key in /home/shimdidly/.ssh/known_hosts:2
        debug1: ssh_ecdsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /home/shimdidly/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password
        debug1: Offering DSA public key: /home/shimdidly/.ssh/id_dsa
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /home/shimdidly/.ssh/id_ecdsa
        debug1: Next authentication method: password
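
    Two things worth checking that the listings above don't show: sshd also refuses keys when the home directory itself is group- or world-writable, and the server's auth log usually states the reason outright. A debugging sketch on 15:

        # On 15: tighten the permissions sshd checks before accepting a key
        chmod go-w /home/shimdidly
        chmod 700 /home/shimdidly/.ssh
        chmod 600 /home/shimdidly/.ssh/authorized_keys

        # Then watch the server-side reason while retrying from 25
        tail -f /var/log/auth.log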

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:

    1. User opens page to view web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. An FTP connection is enabled for the camera's FTP user.
    4. Web camera opens an FTP connection to the server.
    5. Web camera begins taking photos.
    6. Web camera sends each photo to the FTP server.

    On image URL request:

    1. Server reads the latest image on the hard drive uploaded via FTP for that camera.
    2. Server deletes any older images from the server.

    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that instead of having the files read from the server, the web server would open an FTP connection to the FTP server and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly due to the fact that PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive. The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack. The web camera issues POST requests of the latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory. The data flow would then become:

    1. User opens page to view web camera image.
    2. JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    3. Camera is sent an HTTP request to start posting images to a provided URL.
    4. Web camera begins taking photos.
    5. Web camera sends POST requests to the server as fast as it can.

    On image URL request:

    1. Server reads the latest image from Redis.
    2. Server tells Redis to delete the older image.

    My questions are:

    - Are there any greater overheads to transferring images via HTTP instead of FTP?
    - Is there a simple way to calculate how many cameras we could potentially have streaming at once?
    - Is there any way to prevent potentially DOS'ing our own servers due to web camera requests?
    - Is Redis a good solution to this problem?
    - Should I abandon the PHP/Nginx combination and go for something else?
    - Is this proposed solution actually any good?
    - Will adding HTTPS to the mix cause posting the image to become too slow?

    Thanks in advance, Alan
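
    To illustrate the Redis idea above from the shell (the key name and camera ID are hypothetical; the real flow would do this inside the PHP handlers):

        # The camera's POST handler would store the newest frame under a
        # per-camera key, overwriting the previous one
        redis-cli -x SET camera:42:latest < frame.jpg

        # The image URL handler just reads the same key back; redis-cli
        # writes raw bytes when stdout is redirected
        redis-cli GET camera:42:latest > frame_out.jpg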

    Read the article

  • Styles for XAML (Silverlight &amp; WPF)

    - by GeekAgilistMercenary
    This is a quick walkthrough of how to set things up for skinning within a XAML application. First, find the App.xaml file within the WPF or Silverlight project. Within the App.xaml file, set some default styles for your controls. I set the following for a button, label, and border control for an application I am creating.

    Button control:

        <Style x:Key="ButtonStyle" TargetType="Button">
            <Setter Property="FontFamily" Value="Arial" />
            <Setter Property="FontWeight" Value="Bold" />
            <Setter Property="FontSize" Value="14" />
            <Setter Property="Width" Value="180" />
            <Setter Property="Height" Value="Auto" />
            <Setter Property="Margin" Value="8" />
            <Setter Property="Padding" Value="8" />
            <Setter Property="Foreground" Value="AliceBlue" />
            <Setter Property="Background" >
                <Setter.Value>
                    <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                        <GradientStop Color="Black" Offset="0" />
                        <GradientStop Color="#FF5B5757" Offset="1" />
                    </LinearGradientBrush>
                </Setter.Value>
            </Setter>
        </Style>

    Label control:

        <Style x:Key="LabelStyle" TargetType="Label">
            <Setter Property="Width" Value="Auto"/>
            <Setter Property="Height" Value="28" />
            <Setter Property="Foreground" Value="Black"/>
            <Setter Property="Margin" Value="8"/>
        </Style>

    Border control:

        <Style x:Key="BorderStyle" TargetType="Border">
            <Setter Property="BorderThickness" Value="4"/>
            <Setter Property="Width" Value="Auto"/>
            <Setter Property="Height" Value="Auto" />
            <Setter Property="Margin" Value="0,8,0,0"/>
            <Setter Property="CornerRadius" Value="18"/>
            <Setter Property="BorderBrush">
                <Setter.Value>
                    <LinearGradientBrush EndPoint="1,0.5" StartPoint="0,0.5">
                        <GradientStop Color="CornflowerBlue" Offset="0" />
                        <GradientStop Color="White" Offset="1" />
                    </LinearGradientBrush>
                </Setter.Value>
            </Setter>
        </Style>

    These provide good examples of setting individual properties to a default, such as:

        <Setter Property="Width" Value="Auto"/>
        <Setter Property="Height" Value="Auto" />

    and also of setting a more complex property, such as a LinearGradientBrush:

        <Setter Property="BorderBrush">
            <Setter.Value>
                <LinearGradientBrush EndPoint="1,0.5" StartPoint="0,0.5">
                    <GradientStop Color="CornflowerBlue" Offset="0" />
                    <GradientStop Color="White" Offset="1" />
                </LinearGradientBrush>
            </Setter.Value>
        </Setter>

    These property setters should be located between the opening and closing <Application.Resources></Application.Resources> tags:

        <Application x:Class="ScorecardAndDashboard.App"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     StartupUri="MainWindow.xaml">
            <Application.Resources>
            </Application.Resources>
        </Application>

    Now, in the pages, user controls, or whatever you are marking up with XAML, just set a StaticResource for the Style property, as shown below:

        <!-- Border Control -->
        <Border Name="borderPollingFrequency" Style="{StaticResource BorderStyle}">

        <!-- Label Control -->
        <Label Content="Trigger Name:" Style="{StaticResource LabelStyle}"></Label>

        <!-- Button Control -->
        <Button Content="Save Schedule" Name="buttonSaveSchedule" Style="{StaticResource ButtonStyle}" HorizontalAlignment="Right"/>

    That's it. Simple as that. There are other ways to set up resource files that are separate from App.xaml, but the App.xaml file is always a good quick place to start, as moving the styles to a specific resource file later is a mere copy and paste. The original post is available along with other technical ramblings.

    Read the article

  • Nginx SSL redirect for one specific page only

    - by jjiceman
    I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators but not for myself. I was wondering if anyone has any advice on how to improve this configuration:

        ...
        ssl_certificate example.net.crt;
        ssl_certificate_key example.key;

        server {
            listen 80 default;
            listen 443 ssl;
            server_name www.example.net example.net;
            access_log /srv/www/example.net/logs/access.log;
            error_log /srv/www/example.net/logs/error.log;
            root /srv/www/example.net/public_html;
            index index.php index.html;

            location / {
                if ( $scheme = https ) {
                    rewrite ^ http://example.net$request_uri? permanent;
                }
                try_files $uri $uri/ /index.php?$uri&$args;
                index index.php index.html;
            }

            location ^~ /admin.php {
                if ( $scheme = http ) {
                    rewrite ^ https://example.net$request_uri? permanent;
                }
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }

            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
            }
        }
        ...

    It seems that the extra information in the location ^~ /admin.php block is unnecessary; does anyone know of an easy way to avoid duplicate code? Without it, nginx skips the PHP block and just returns the PHP files. Currently it applies HTTPS correctly in Firefox when I navigate to admin.php. In Chrome, it downloads the admin.php page. When returning to the non-HTTPS website in Firefox, it does not correctly return to http but stays on SSL. Like I said earlier, this only happens for me; the other admins can go back and forth without a problem. Is this an issue on my end that I can fix? And does anyone know of any ways I could reduce duplicate configuration options in the configuration? Thanks in advance!

    EDIT: Clearing the cache/cookies seemed to work. Is this the right way to do http/https redirection? I sort of made it up as I went along.
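
    When testing rules like these, it helps to bypass the browser entirely - browsers cache 301 redirects, which is consistent with the cache-clearing fix noted above. A sketch with curl:

        # Should answer with a 301 Location: https://example.net/admin.php
        curl -sI http://example.net/admin.php | head -n 3

        # And a normal page over https should bounce back to http
        curl -skI https://example.net/ | head -n 3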

    Read the article

  • YouTube SEO: Video Optimization

    - by Mike Stiles
    SEO optimization is still regarded as one of the primary tools in the digital marketing kit. However and wherever a potential customer is conducting a search, brands want their content to surface in the top results. Makes sense. But without a regular flow of good, relevant content, your SEO opportunities run shallow. We know from several studies video is one of the most engaging forms of content, so why not make sure that in addition to being cool, your videos are helping you win the SEO game?

    Keywords:
    - Decide what search phrases make the most sense for your video. Don't dare use phrases that have nothing to do with the content. You'll make people mad.
    - Research those keywords to see how competitive they are. Adjust them so there are still lots of people searching for it, but there are not as many links showing up for it.
    - Search your potential keywords and phrases to see what comes up. It's amazing how many people forget to do that.

    Video Title:
    - Try to start and/or end with your keyword.
    - When you search on YouTube, visual action words tend to come up as suggested searches. So try to use action words.

    Video Description:
    - Lead with a link to your site (include http://).
    - Don't stuff this with your keyword. It leads to bad writing and it won't work anyway. This is where you convince people to watch, so write for humans. Use some showmanship.
    - At the end, do a call to action (subscribe, see the whole playlist, visit our social channels, etc.)

    Video Tags:
    - Don't over-tag. 5-10 tags per video is plenty.
    - If you're compelled to have more than 10, that means you should probably make more videos specifically targeting all those keywords.

    Find Linking Pals:
    - 45% of videos are discovered on video sites. But 44% are found through links on blogs and sites.
    - Write a blog about your video's content, then link to the video in it.
    - A good site for finding places to guest blog is myblogguest.com.
    - Once you find good linking partners, they'll link to your future videos (as long as they're good and you're returning the favor).

    Tap the Power of Similar Videos:
    - Use Video Reply to associate your video with other topic-related videos. That's when you make a video responding to or referencing a video made by someone else.

    Content:
    - Again, build up a portfolio of videos, not just one that goes after 30 keywords.
    - Create shorter, sequential videos that pull them deeper into the content and closer to a desired final action.
    - Organize your video topics separately using Playlists. Playlists show up as a whole in search results like individual videos, so optimize playlists the same as you would for a video.

    Meta Data:
    - Too much importance is placed on it. It accounts for only 15% of search success.
    - YouTube reads Captions or Transcripts to determine what a video is about. If you're not using them, you're missing out.
    - You get the SEO benefit of captions and transcripts whether the viewer has them toggled on or not.

    Promotion:
    - This accounts for 25% of search success.
    - Promote the daylights out of your videos using your social channels and digital assets. Don't assume it's going to magically get discovered.
    - You can pay to promote your video. This could surface it on the YouTube home page, YouTube search results, YouTube related videos, and across the Google content network.

    Community:
    - Accounts for 10% of search success.
    - Make sure your YouTube home page is a fun place to spend time. Carefully pick your featured video, and make sure your Playlists are featured.
    - Participate in discussions so users will see you're present. The volume of ratings/comments is as important as the number of views when it comes to where you surface on search.

    Video Sitemaps:
    - As with a web site, a video sitemap helps Google quickly index your video.
    - Google wants to know title, description, play page URL, the URL of the thumbnail image you want, and raw video file location.
    - Sitemaps are XML files you host or dynamically generate on your site. Once you've made your sitemap, sign in and submit it using Google Webmaster Tools.

    Just as with the broadcast and cable TV channels, putting a video out there is only step one. You also have to make sure everybody knows it's there so the largest audience possible can see it. Here's hoping you get great ratings. @mikestiles

    Read the article

  • Reviews Cheyney Group Marketing: What accounting software is available in the market for small businesses?

    - by user224313
    Accounting is the language of business, and good accounting software can save you hundreds of hours at the business equivalent of Berlitz. There's no substitute for an accounting pro who knows the ins and outs of tax law, but today's desktop packages can help you with everything from routine bookkeeping to payroll, taxes, and planning. Each package also produces files that you can hand off to an accountant as needed.

    Small-business managers have more accounting software options than ever, including subscription Web-based options that don't require their users to install or update software. Many businesses, however--including those that need to track large inventories or client databases, and those that prefer not to entrust their data to the cloud--may be happier with a desktop tool.

    We looked at three general-purpose, small-business accounting packages: Acclivity AccountEdge Pro 2012 (both the product and the company were previously called MYOB), Intuit QuickBooks Premier 2012, and Sage's Sage 50 Complete 2013 (the successor to Peachtree Complete). All three packages offer a solid array of tools for tracking income and expenses, invoicing, managing payroll, and creating reports. These full-featured and highly mature programs don't come cheap. Acclivity AccountEdge Pro, at $299, is the least expensive, and prices climb if you opt for common time-saving add-ons such as payroll services, or if you add licenses for multiple user accounts.

    All three are solid on the basics, but they have distinct differences in style and focus. The more you know about your accounting requirements, the more closely you'll want to look at the software you're thinking of buying. Sage 50 Complete should appeal most to people who understand the fine points of accounting and can use the product's many customization features (especially for businesses that manage inventory). QuickBooks works hard to appeal to newbies who need only the basics and might be intimidated by the level of detail and technical language exposed in the other two packages. At the same time, it also has a slew of third-party add-ons that meet specific needs and greatly expand its capabilities. AccountEdge Pro balances accessibility with a strong feature set at an affordable price. It's especially suitable for businesses that need to provide simultaneous access to multiple users.

    Read the article

  • Setting Up My Server to Do DNS On OpenSuse 11.3

    - by adaykin
    Hello, I am attempting to use my server as a DNS server. I am having trouble getting the domain set up. Here is what I have so far.

    /var/lib/named/master/andydaykin.com:

    $TTL 2d
    @   IN SOA   andydaykin.com. root.andydaykin.com. (
                 2011011000 ; serial
                 0          ; refresh
                 0          ; retry
                 0          ; expiry
                 0 )        ; minimum
    andydaykin.com.     IN NS   ns1.andydaykin.com.
    andydaykin.com.     IN SOA  ns1.andydaykin.com. hostmaster.andydaykin.com. (
    @.andydaykin.com.   IN NS   ns1.andydaykin.com.
    ns1.andydaykin.com. IN A    204.12.227.33
    www.andydaykin.com. IN A    204.12.227.33

    /etc/resolve.conf:

    search andydaykin.com
    nameserver 204.12.227.33

    /etc/named.conf:

    options {
        # The directory statement defines the name server's working directory
        directory "/var/lib/named";
        dump-file "/var/log/named_dump.db";
        statistics-file "/var/log/named.stats";
        listen-on port 53 { 127.0.0.1; };
        listen-on-v6 { any; };
        notify no;
        disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
        include "/etc/named.d/forwarders.conf";
    };

    zone "." in {
        type hint;
        file "root.hint";
    };

    zone "localhost" in {
        type master;
        file "localhost.zone";
    };

    zone "0.0.127.in-addr.arpa" in {
        type master;
        file "127.0.0.zone";
    };

    # Include the meta include file generated by createNamedConfInclude.
    # This includes all files as configured in NAMED_CONF_INCLUDE_FILES
    # from /etc/sysconfig/named
    include "/etc/named.conf.include";

    zone "andydaykin.com" in {
        file "master/andydaykin.com";
        type master;
        allow-transfer { any; };
    };

    logging {
        category default { log_syslog; };
        channel log_syslog { syslog; };
    };

    What am I doing wrong?
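    Two things stand out, though this is a guess from the paste rather than anything confirmed in the question: the zone file declares a second SOA record with an unclosed parenthesis (a zone must have exactly one SOA), and "@.andydaykin.com." is not a valid owner name. A minimal corrected zone sketch (the serial and timer values here are illustrative, not from the question) might look like:

        $TTL 2d
        @      IN SOA  ns1.andydaykin.com. hostmaster.andydaykin.com. (
                       2011011001  ; serial - bump on every change
                       3h          ; refresh
                       1h          ; retry
                       1w          ; expiry
                       1d )        ; minimum
        @      IN NS   ns1.andydaykin.com.
        ns1    IN A    204.12.227.33
        www    IN A    204.12.227.33

    Note also that listen-on port 53 { 127.0.0.1; }; tells named to answer only on loopback; if outside clients are supposed to query this server, its public address (204.12.227.33) presumably needs to be in that list as well.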

    Read the article

  • TrueCrypt partition will no longer mount

    - by sparkyuiop
    I am hoping for some advice to help me out of my situation, with luck. I have a computer running Windows 7 Ultimate x64 with 3 hard disks installed. On my 2TB hard disk 2 (non-system disk) I have 4 partitions: one for music, another for video, a downloads partition, and a 500GB RAW TrueCrypt encrypted partition/volume that I had set up to mount with 4 photographs used as keyfiles. The 4 photographs are located in my 'Documents' partition, which is one of four partitions on my 1.5TB hard disk 1 (non-system disk). When I set up the disk encryption I did not (I'm 99% sure) create a password; I only used the 4 photograph keyfiles to mount the volume.

    Recently my 1TB hard disk 0 (system/boot) started to fail, so I decided to replace it. I was going to clone the old disk to a new disk but decided that a fresh installation would be more beneficial. Once I had transferred all the required 'User Data' from my old hard disk 0 (C: disk) I discarded it. I reinstalled TrueCrypt, pointed to the partition, selected my 4 keyfile photographs, and mounted my encrypted volume with no issues. In fact I mounted it several times after reinstalling Windows and after reboots.

    Now all of a sudden when I try to mount it I get the message "incorrect keyfile(s) and/or password or not a TrueCrypt volume". I am not sure why this happened, as I do not recall exactly what I did between last mounting the volume successfully and it not mounting. Here are some of the possible things I may have done to cause it to stop working, but I am at a loss as to where to start to try and resolve the problem:

    1. I swapped the drive letters to a preferred order.
    2. I possibly swapped the physical SATA connectors on the mainboard.
    3. I enabled 'Hot Plugging' for the two non-system hard disk SATA ports and the DVD SATA port in the BIOS.

    I have tried changing the encrypted partition's drive letter as suggested in another post, but this does not help. On my old system the encrypted drive was drive "X". I have tried just about all the other free drive letters, but alas nothing changes. I do not recall what drive letter was allocated to the encrypted partition before I changed them all. I have not tried to change the letter back to what it possibly was to start with, as I am happy with the current layout. I will try this if anyone thinks it would be worthwhile, though.

    I do hope I have managed to convey my situation in an understandable manner, and I live in hope that someone could help me recover years of personal files. Thank you very much for taking the time to read my post and for any suggestions you may offer.

    Regards
    Phillip Thorne (UK)

    Anyone???

    Read the article

  • "pdf_open: Not a PDF 1.[1-5] file." when typesetting TeX file in TextMate

    - by Manti
    I have a TeX file which is typeset by XeLaTeX. The file has \includegraphics{coverimage.eps} on the first page, and coverimage.eps is located in the same directory as the TeX file.

    OS: MacOS 10.6.5, TextMate 1.5.10, xelatex 3.1415926-2.2-0.9997.4 (TeX Live 2010)

    If I run xelatex doc.tex in the console, the file compiles without errors and the eps file is included. If I typeset it in TextMate (with the xelatex engine selected in preferences), I get the following error:

    ** WARNING ** pdf_open: Not a PDF 1.[1-5] file.
    ** WARNING ** Failed to include image file "./coverimage.eps"
    ** WARNING ** >> Please check if
    ** WARNING ** >> rungs -q -dNOPAUSE -dBATCH -sPAPERSIZE=a0 -sDEVICE=pdfwrite -dCompatibilityLevel=%v -dAutoFilterGrayImages=false -dGrayImageFilter=/FlateEncode -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode -sOutputFile=%o %i -c quit
    ** WARNING ** >> %o = output filename, %i = input filename, %b = input filename without suffix
    ** WARNING ** >> can really convert "./coverimage.eps" to PDF format image.
    ** WARNING ** pdf: image inclusion failed for "coverimage.eps".
    ** WARNING ** Failed to read image file: coverimage.eps
    ** WARNING ** Interpreting special command PSfile (ps:) failed.
    ** WARNING ** >> at page="1" position="(107.149, 124.566)" (in PDF)
    ** WARNING ** >> xxx "PSfile="coverimage.eps" llx=0 lly=0 urx=408 ury=526 rwi=3809

    and there is no image included in the resulting file (but the text looks OK). I am not sure whether it is an error in xelatex or in the TextMate LaTeX bundle.

    What I tried so far:

    - As I said, xelatex doc.tex from the console works. The exact command line that TextMate uses is xelatex -interaction=nonstopmode -file-line-error-style -synctex=1, but that works from the console too.
    - If I convert the image to PDF (and fix the .tex file accordingly), typesetting from TextMate works too.
    - I tried running rungs with the parameters specified in the error message, and got a valid .pdf file with the image as a result.
    - I compared the .log files from typesetting in TextMate and in the console; they are absolutely identical except for this error message (in particular, the version of xelatex is the same).

    Does anyone know what can cause this? Please tell me if you need any additional information. Thank you in advance.
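    A common cause of works-in-console-but-not-in-TextMate behavior is that the GUI app inherits a different environment (typically a shorter PATH) than your shell, so the rungs/Ghostscript helper that the EPS conversion shells out to isn't found. That is only a guess, but it's cheap to check by comparing the two environments:

        # In a normal terminal:
        which xelatex rungs gs
        echo "$PATH"

        # Then run the same two commands from inside TextMate (for example
        # via a one-line shell command in a bundle) and compare the output.
        # If rungs or gs resolves differently, or not at all, that mismatch
        # is the likely culprit.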

    Read the article

  • Unable to connect to site on a different port

    - by JohnMerlino
    I have a domain named http://mysite.com/ that was registered at GoDaddy. I logged into GoDaddy and went to All Products > Domains > Domain Management. I clicked on the appropriate domain, which took me to the Domain Details page. I clicked Launch under DNS Manager, which took me to the Zone File Editor. I noticed that notify.mysite.com was pointing to the IP address of a dead server, so I switched it to an operating server. Then I pinged the domain to see where it was pointing, and it was correctly pointing to the working server.

    So I copied the default configuration under sites-available (sudo cp default notify.mysite.com) and then edited it to point to a different document root and serve files on a different port:

    Listen 1740
    Listen 64.135.xx.xxx:1740
    # I also tried this as well:
    NameVirtualHost 64.135.xx.xxx:1740

    <VirtualHost 64.135.xx.xxx:1740>
        ServerAdmin [email protected]
        ServerName notify.mysite.com
        DocumentRoot /var/www/test/public
        <Directory /var/www/test/public>
            Order allow,deny
            allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>

    Then I enabled the virtual host. I went to the document root, added an index.html file with some text in it, and restarted Apache. The restart gave no errors. Then I typed the correct URL, http://notify.mysite.com:1740/, and I get:

    Oops! Google Chrome could not connect to notify.mysite.com:1740

    Somehow it took out all my other sites. Now even the ones that were responding on port 80 are no longer responding, even though I did not touch their virtual hosts. I get this message now:

    Oops! Google Chrome could not connect to mysite.com

    However, ping responds:

    $ ping mysite.com
    PING mysite.com (64.135.12.134): 56 data bytes
    64 bytes from 64.135.12.134: icmp_seq=0 ttl=49 time=20.839 ms
    64 bytes from 64.135.12.134: icmp_seq=1 ttl=49 time=20.489 ms

    The result of telnet:

    $ telnet guarddoggps.com 80
    Trying 64.135.12.134...
    telnet: connect to address 64.135.12.134: Connection refused
    telnet: Unable to connect to remote host
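    One plausible explanation for every site going down at once (a guess based on the pasted config, not something confirmed in the question): the two Listen directives overlap. Listen 1740 already binds port 1740 on all interfaces, so the following Listen 64.135.xx.xxx:1740 can fail with "Address already in use", and when a Listen fails Apache typically refuses to start at all, taking the port 80 sites down with it. A sketch of the fix, plus some checks:

        # Keep only one of the two overlapping directives, e.g.:
        Listen 1740                 # binds *:1740; drop the IP-specific Listen

        # Then verify the config and see what is actually bound:
        sudo apache2ctl configtest
        sudo netstat -tlnp | grep -E ':(80|1740)'
        sudo tail -n 50 /var/log/apache2/error.log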

    Read the article

  • Is there a way to replicate very large file shares in real time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a Samba file share. Here's how the folder structure looks:

    \\server\Version.1
    \\server\Version.2
    \\server\Version.3
    ...
    \\server\Version.24

    The contents of each new folder usually don't change very much compared to the last one, since this is an hourly job.

    Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out about it. It's actually been used for years and is so incredibly simple that anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant!

    Now to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real time as possible -- think hot spare, disaster recovery, etc.

    My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently.

    The best thing I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression". After reading the background information on how it works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh!

    One approach I'm looking at is this. Say it's 5AM and the cron job finishes:

    1. The new Version.5 folder arrives on the local server.
    2. SSH to the remote server and copy Version.4 to Version.5.
    3. Run rsync on the local server, pushing changes to the remote server. Rsync finally knows to do a differential copy between Version.4 and Version.5.

    Is there a smarter way to replicate Samba shares as close to real time as possible? Anything out there that does "Remote Differential Compression" on Linux?
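    For what it's worth, rsync's --link-dest option was built for exactly this "mostly similar snapshots" shape: it creates the new remote folder as hard links against the previous snapshot and transfers only what changed, which also removes the need for the copy in step 2. A sketch, assuming SSH access and using hypothetical host and path names:

        #!/bin/sh
        # Hypothetical names; adjust to the real share and replica paths.
        SRC=/srv/share/Version.5            # newest hourly snapshot, local
        REMOTE=backup.example.com
        DEST=/srv/replica

        # Build Version.5 on the remote side, hard-linking unchanged files
        # against Version.4 so only changed data crosses the WAN link.
        rsync -a --delete \
              --link-dest="$DEST/Version.4" \
              "$SRC/" "$REMOTE:$DEST/Version.5/"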

    Read the article
