Search Results

Search found 25727 results on 1030 pages for 'solution'.

  • How to manage iowait over cifs?

    - by Silvia
    For backup purposes we have a CIFS file server running that contains encrypted containers for backing up the more sensitive data. The container is mounted with cryptsetup and loop as a local filesystem, and rsync is used for backups. Because the CIFS server is not the fastest machine ever built, running the rsync process results in iowait on the servers running the backup, which in turn drives Nagios into an email frenzy. The question is, how do we reduce the iowait on the server? Configuring Nagios not to report seems more like a workaround than a solution. Stretching the backups over different time intervals is already done, with little effect, and spending money is also not an option because apparently we are talking about a "non-critical system".
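
    One hedged starting point, assuming the backup host runs Linux (ionice needs the CFQ I/O scheduler to have any effect) and a reasonably recent rsync: lower the I/O and CPU priority of the backup job and cap its transfer rate so it cannot saturate the disk. The paths below are placeholders.

        # Run the backup at idle I/O priority and lowest CPU priority,
        # and cap rsync's transfer rate (KB/s) -- tune the limit to taste
        ionice -c 3 nice -n 19 rsync -a --bwlimit=5000 /mnt/container/ /backup/target/

    This trades a longer backup window for lower iowait, so the Nagios thresholds may still need a small bump for the duration of the job.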

    Read the article

  • Ugly/Inconsistent Theming in Ubuntu Gnome 12.10

    - by Erland
    Some applications are displaying really ugly widgets and menus. I think it's a GTK issue, and perhaps it applies only to GTK2 apps, but I'm not sure. The numerous questions on here that deal with GTK2 vs. GTK3 themes do not answer my problem. Here is my situation: I'm using Ubuntu Gnome with Gnome Shell (installed using the "upgrade" instructions rather than a fresh install) with the default Adwaita theme. The reason I did an upgrade instead of a fresh install is that I'm on a MacBook Air and there is no Mac image/ISO for Ubuntu Gnome. Previously, I did a fresh install of Ubuntu Gnome 12.10 and had no theming problems. Now, apps like Nautilus, Rhythmbox, Brasero, and even third-party ones like Lightread look exactly as expected, but other apps, including Firefox, Inkscape, GIMP, and LibreOffice, look awful. Some examples: Firefox with an ugly location bar: http://ubuntuone.com/3e2X0JTa4CT4afC4303U9c vs. the Nautilus location bar: http://ubuntuone.com/3TbHWWuNMcJnlpI4IpjiUO; the GIMP file dialogue (like Windows 95!): http://ubuntuone.com/4ioCcqq3flgO7zAWgAhfWy vs. the Rhythmbox file dialogue (correct): http://ubuntuone.com/2xLplCOBvQnyeqdsTGdgXq; menus in LibreOffice (very bad for usability): http://ubuntuone.com/26WTaEz4PMGmiItGeSmBjZ vs. menus in Rhythmbox: http://ubuntuone.com/4Ib4thMLqohsle6J5KEvuI. I've been searching for a solution to this problem for some time. The logical explanation is that all the GTK3 apps are working and anything that is still using GTK2 is not. If that's the case, though, why did the same installation (Ubuntu Gnome 12.10) and the same theme (Adwaita) previously work with all those GTK2 apps? Desperate for help!
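
    One hedged guess, consistent with the GTK2-vs-GTK3 split described above: the upgrade may have dropped the GTK2 side of the Adwaita theme, which lives in a separate package from the GTK3 theme. The package names below are what Ubuntu shipped around 12.10 and are an assumption worth verifying with apt-cache search first.

        # Reinstall the GTK2 engine and the package that ships Adwaita's GTK2 theme
        sudo apt-get install --reinstall gnome-themes-standard gtk2-engines-pixbuf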

    Read the article

  • Another installation is in progress

    - by Steven
    When I try to install any program I see the "Another installation is in progress. You must complete that installation before continuing this one." error. I googled and found that the solution should be to delete the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\InProgress registry key and reboot. Unfortunately, that didn't help me. When I open the "Services" MMC snap-in it shows that the "Windows Installer" service is "Started", but the "Start/Stop/Pause/Restart" buttons are grayed out (the interesting thing is that the startup type is "Manual", so I don't really know how to explain that). I already have 2 instances of msiexec.exe in memory, and one instance is consuming 50 MB of memory. It looks like there's a serious issue with my installer service. Is there any way to fix it (please bear in mind that I can't install anything!)? Any help would be greatly appreciated.
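
    A hedged next step, assuming the installer service registration itself is wedged rather than one genuinely running install: kill the orphaned msiexec.exe processes and re-register the service. These are standard msiexec switches, not anything specific to this error.

        REM From an elevated command prompt
        taskkill /f /im msiexec.exe
        msiexec /unregister
        msiexec /regserver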

    Read the article

  • Cannot Enter Credentials Over UAC Prompts During Remote Assistance

    - by user100731
    We are using SonicWall firewall devices throughout our network, and we use the SonicWall virtual assistance tool for remote desktop assistance. Since our systems are on a domain rather than in a workgroup, we run into problems when UAC prompts appear. As a workaround we edited the UAC policies, such as disabling "Switch to the secure desktop when prompting for elevation" and enabling "Allow UIAccess applications to prompt for elevation without using the secure desktop". The result is that we can see the UAC prompt on the remote user's system but cannot interact with it: we cannot enter credentials, even though we can watch the password being entered when the local user types it. Is there any solution for this?

    Read the article

  • Simple monitoring utility with up/down statuses of the host's network connectivity and services

    - by Beaming Mel-Bin
    We've looked at many monitoring tools (SolarWinds, Zabbix, Nagios) throughout the last 10 years, but they never took hold because they are overly complicated. I am willing to try them again, or something new, but with a much simpler goal: ping to check whether a host is up or down; TCP probes to test whether a service is up or down; notifications via e-mail; a web GUI; and preferably an OSS solution. I wanted to know if anyone has recommendations. This could be a Windows or Linux application, preferably without the requirement of agents. I don't even need SNMP support, but that might be nice for expanding once we have the above-mentioned bare minimum in place.
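
    For a sense of scale while evaluating tools: the two checks described above are each only a few lines in any language. A minimal C# sketch (host and port are placeholders; e-mail, scheduling, and the web GUI are exactly the parts a real tool adds):

        using System;
        using System.Net.NetworkInformation;
        using System.Net.Sockets;

        class UpDownCheck
        {
            // Ping check: is the host reachable at all?
            static bool HostIsUp(string host)
            {
                using (var ping = new Ping())
                    return ping.Send(host, 2000).Status == IPStatus.Success;
            }

            // TCP probe: is the service accepting connections on this port?
            static bool ServiceIsUp(string host, int port)
            {
                try
                {
                    using (var client = new TcpClient())
                    {
                        client.Connect(host, port); // throws if refused or unreachable
                        return true;
                    }
                }
                catch (SocketException)
                {
                    return false;
                }
            }

            static void Main()
            {
                Console.WriteLine("host: " + (HostIsUp("example.org") ? "up" : "down"));
                Console.WriteLine("web:  " + (ServiceIsUp("example.org", 80) ? "up" : "down"));
            }
        }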

    Read the article

  • Oracle Brings Analytics to Project Management

    - by Sylvie MacKenzie, PMP
    Excerpt from Profit (Oracle), by Alison Weiss. Nonprofit and for-profit organizations have many differences, but there is one way they are alike—managers struggle with huge amounts of data generated every day. Project data by itself has limited use—but any organization that can gain insight to make accurate predictions or to use resources more effectively can gain an operational advantage. Oracle’s Primavera P6 Analytics 2.0 business intelligence solution enables organizations using Oracle’s Primavera P6 Professional Project Management to do just that: identify critical issues and uncover trends in stores of project data. Primavera P6 Analytics provides management with the ability to look at not only how a single effort is progressing, but also how the entire organization is doing from a project perspective. The latest release includes new features that make it even easier to gather and analyze critical information. For example, the addition of geocoding gives Primavera P6 Analytics users the ability to track resources geographically by longitude and latitude and to use a map to get an overall view of how projects, programs, and activities are deployed. “A nonprofit with relief projects in Vietnam, for example, can drill down to the project and get a world view and a regional view,” says Yasser Mahmud, vice president of product strategy and industry marketing in Oracle’s Primavera Global Business Unit. “Then they can drill down further to show statistics; key performance indicators; and how that program, portfolio, or project work is actually getting done.” The addition of new mobile capabilities to Primavera P6 Analytics, with compatibility with major tablet operating systems, puts deep-dive analysis into project managers’ hands. Now, nonprofits or for-profits working in remote locations can provide real-time visibility into projects to alert management if issues are occurring that need to be addressed immediately. “Primavera P6 Analytics generates information that can help organizations improve their utilization and trim down overall operating costs,” says Mahmud. “But more importantly, it gives organizations improved visibility.”

    Read the article

  • Command line import of database using latin1 encoding

    - by chrisjlee
    I'm using a particular cloud hosting solution (one which I won't name), and they don't provide SSH access, so I have no control over how the database is dumped. I downloaded the dump, which is packed into a tar.gz file, and discovered that it uses latin1 encoding. I don't get to specify the encoding on the host because I don't have SSH or DB access. I tried to import it via the command line into my local development environment (mysql -uroot foodb < file.db) like I usually do with other databases, but I'm having problems. Is it possible to import a database via the command line while specifying the encoding (preferably latin1)? Or do I have to convert it to UTF-8?
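
    Assuming a stock mysql client, the connection character set can be set with a standard flag (database and file names below are the ones from the question):

        # Tell the client to treat the dump as latin1 rather than its default
        mysql --default-character-set=latin1 -uroot foodb < file.db

    Alternatively, adding a SET NAMES latin1; line at the top of the dump file has the same effect, since the flag only changes the connection encoding, not how the tables themselves were declared.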

    Read the article

  • Proven and Scalable Comet Server

    - by demetriusnunes
    What is the most proven, scalable Comet server solution out there that can handle up to 100,000 real-life connections per node using HTTP streaming (not long polling)? It must be free, and preferably an open-source project. We've already tried Meteor (Perl) with no success: Meteor was able to scale only up to 20,000 connections per node. We are looking right now at these options: APE (C++), Orbited (Python), Grizzly (GlassFish), Cometd (Jetty). Any big success stories with any of these?

    Read the article

  • Vim Stuck In Insert Mode

    - by Levi Hackwith
    I've been using Vim for several months now via my web host (they allow PuTTY access). All of a sudden, the Escape key has become unresponsive: I cannot exit insert mode, or any other mode, by simply hitting Escape. I have to hit F1, which brings up the Vim help and kicks me into command mode. I'm quite certain that the Escape key on my keyboard is functioning fine, since all of my Windows shortcuts that use it operate normally. I know this is a ridiculous question and I'm certain there's a lot more to look into regarding a solution; what I really need is a solid lead as to where to start looking. Things that might help: I'm using Vim via PuTTY; I'm logging in using jailshell; I'm not root.
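
    Two stock-Vim facts that can help narrow this down (nothing here is specific to this host): Ctrl-[ sends the same character as Escape, so if Ctrl-[ works the problem is in the terminal or keyboard path rather than in Vim; and an insert-mode mapping gives a usable workaround in the meantime.

        " In ~/.vimrc: make typing 'jj' leave insert mode, standing in for Escape
        inoremap jj <Esc>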

    Read the article

  • Stuck at enemy movement

    - by Syed
    I am making a TD game in Unity. Initially I made all of my enemy movement frame rate dependent. Say I had one grid point at -22.65 and the next at -21.1, diagrammatically: (-22.65) _________ (-21.1) _______ (-21.1+1.55) ... so the distance on the x axis between two points is 1.55, divided into 25 jumps, with each enemy moving 0.062 every frame. On reaching the next point of the grid, the enemy finds its path again. All went fine until I needed fast-forward and pause features. I used Unity's timeScale property, but it won't work because the movement is frame dependent. I also tried doubling the enemy's speed when the fast-forward button is clicked; the issue is that the enemy's jumps no longer line up and it fails to land exactly on the next grid point. Could someone suggest a solution? Do I need to change the enemy movement code to make it frame rate independent? I need the enemy to land on specific grid points, and I also need to slow down a single enemy's speed later, when a tower fires on it. Thanks
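
    Yes: moving a fixed distance per frame is exactly what breaks timeScale. A hedged sketch of the usual fix (the field names are made up): advance by speed * Time.deltaTime and let MoveTowards clamp the step, so pause (timeScale = 0), fast-forward (timeScale = 2), and per-enemy slows all fall out naturally.

        using UnityEngine;

        public class EnemyMover : MonoBehaviour
        {
            public float speed = 1.55f;    // world units per second (hypothetical value)
            public float slowFactor = 1f;  // set below 1 while a tower slows this enemy
            private Vector3 target;        // next grid point, set by the pathfinder

            void Update()
            {
                // Time.deltaTime is already scaled by Time.timeScale, so pause
                // and fast-forward work without touching this code.
                float step = speed * slowFactor * Time.deltaTime;
                transform.position = Vector3.MoveTowards(transform.position, target, step);

                // MoveTowards never overshoots, so an exact comparison is safe here.
                if (transform.position == target)
                {
                    // Reached the grid point: ask the pathfinder for the next target.
                }
            }
        }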

    Read the article

  • SOA Community Newsletter November 2012

    - by JuergenKress
    Dear SOA partner community member, Too many different products from Oracle and no idea how they fit together? Get a copy of the Oracle catalog, an excellent overview of the Oracle middleware portfolio. BPM is a key solution in this portfolio. To position BPM with your customers you can find many use-case ideas in the paper BPM 11g Patterns and industry-specific value propositions for Financial Services & Insurance & Retail. Many more Process Accelerators (11.1.1.6.2) have become available; they are an excellent demo and starting point for BPM projects. Our SOA Suite team published the most important OOW presentations at the OTN website. The Oracle SOA proactive support team is running a series of introductory blog posts about SOA and JMS. To become an expert in SOA, Bob highlighted the latest list of SOA books. For OSB projects we recommend the EAIESB OSB poster. Thanks to all the experts who contributed and shared their SOA & BPM knowledge again this month. Please feel free to send us the link to your blog post via Twitter @soacommunity: Undeploy multiple SOA composites with WLST or ANT, by Danilo Schmiedel; Fault Handling Slides and Q&A, by Vennester; Installing Oracle Event Processing 11g, by Antoney Reynolds; Expanding the Oracle Enterprise Repository with functional documentation, by Marc Kuijpers; Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile, by Michelle Kimihira; A brief note for customers running SOA Suite on AIX platforms, by Christian; ACM - Adaptive Case Management, by Peter Paul; BPM 11g - Dynamic Task Assignment with Multi-level Organization Units, by Mark Foster; Oracle Real User Experience Insight: Oracle's Approach to User Experience. Hope to see you at the Middleware Day at the UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/soanewsNovember2012 (OPN account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: SOA Community newsletter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • PXE Boot PCLinuxOS ISO

    - by DBNotCooper
    I'm in the process of trying to convert some computers at my local school into diskless browser stations. We've identified PCLinuxOS as the OS we'd like to use due to its easy interface for creating custom ISO images (we need WINE and some custom apps installed, as well as Firefox). I've been having problems figuring out how to get an ISO to boot via PXE. On our network I only have access to TFTP and HTTP, so I cannot use NFS. The machines all have enough memory (4 GB) that they could use a RAM drive to hold the ISO image, if that helps. Currently I've been looking at gPXE with GRUB/MEMDISK, but I don't know if that's the right solution, or even where a good resource is for setting it up. Searching the web has proved fruitless, as most of the information is either NFS-specific or out of date. The other students and I would appreciate any help! :)
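
    For reference, the MEMDISK route needs nothing but TFTP: pxelinux loads the entire ISO into RAM and boots it. A hedged pxelinux.cfg/default entry (the ISO filename is a placeholder; note that many live distros still expect to find their root filesystem on a CD afterwards, so whether PCLinuxOS boots fully this way is worth testing before committing):

        LABEL pclinuxos
          KERNEL memdisk
          APPEND iso raw
          INITRD pclinuxos-custom.iso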

    Read the article

  • There is no two finger scroll option in my "Mouse and Touchpad" settings

    - by Ian
    I simply do not have the option for "two-finger scrolling" available in my "Mouse and Touchpad" settings. I have tried a lot of terminal commands that I have found in the forums, with no success. Does anyone have a solution that will enable two-finger scrolling? A little about my system:

        Ubuntu 12.04.1 LTS \n \l

        Built-in Pointing Device
          Type: Mouse
          Interface: PS/2
          Buttons: 2

        ~$ xinput list
        ⎡ Virtual core pointer                    id=2   [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer          id=4   [slave  pointer  (2)]
        ⎜   ↳ PS/2 Synaptics TouchPad             id=15  [slave  pointer  (2)]
        ⎣ Virtual core keyboard                   id=3   [master keyboard (2)]
            ↳ Virtual core XTEST keyboard         id=5   [slave  keyboard (3)]
            ↳ Power Button                        id=6   [slave  keyboard (3)]
            ↳ Video Bus                           id=7   [slave  keyboard (3)]
            ↳ Power Button                        id=8   [slave  keyboard (3)]
            ↳ Sleep Button                        id=9   [slave  keyboard (3)]
            ↳ WebCam SC-13HDL10931N               id=10  [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard        id=14  [slave  keyboard (3)]

    (Screenshot of system settings not shown.)
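
    If the checkbox is missing but the device really is a Synaptics touchpad (as the xinput output above suggests), the driver can be asked directly. These are standard synaptics-driver properties, though some touchpads genuinely cannot report two fingers, in which case this will have no effect:

        # List the current scroll-related settings, then enable two-finger scrolling
        synclient -l | grep -i scroll
        synclient VertTwoFingerScroll=1 HorizTwoFingerScroll=1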

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long-term, case-by-case project, but the overall trend is clear. The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero-tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 (ld option to warn when link accesses a library via default path; PSARC/2011/068 ld -z assert-deflib option) into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

            Enables warning messages for libraries specified with the -l
            command line option that are found by examining the default
            search paths provided by the link-editor. If a libname value
            is provided, the default library warning feature is enabled,
            and the specified library is added to a list of libraries for
            which no warnings will be issued. Multiple -z assert-deflib
            options can be specified in order to specify multiple
            libraries for which warnings should not be issued. The
            libname value should be the name of the library file, as
            found by the link-editor, without any path components.

    For example, the following enables default library warnings, and excludes the standard C library:

        ld ... -z assert-deflib=libc.so ...

    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 (Prevent OSnet from accidentally linking to build system), which integrated into snv_162 (March 2011).
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them. Conclusions: The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • What are functional-programming ways of implementing Conway's Game of Life

    - by George Mauer
    I recently implemented Conway's Game of Life in JavaScript for fun (actually CoffeeScript, but same thing). Since JavaScript can be used as a functional language, I was trying to stay toward that end of the spectrum. I was not happy with my results. I am a fairly good OO programmer, and my solution smacked of same-old-same-old. So, long question short: what is the (pseudocode) functional style of doing it? Here is pseudocode for my attempt:

        class Node
          update: (board) ->
            get number_of_alive_neighbors from board
            get this_is_alive from board
            if this_is_alive and number_of_alive_neighbors < 2 then die
            if this_is_alive and number_of_alive_neighbors > 3 then die
            if not this_is_alive and number_of_alive_neighbors == 3 then alive

        class NodeLocations
          at: (x, y) -> return node value at x,y
          of: (node) -> return x,y of node

        class Board
          getNeighbors: (node) ->
            use node_locations to check 8 neighbors around node and return count

        nodes = for 1..100
          new Node
        state = new NodeState(nodes)
        locations = new NodeLocations(nodes)
        board = new Board(locations, state)

        executeRound:
          state = clone state
          accumulated_changes = for n in nodes
            n.update(board)
          apply accumulated_changes to state
          board = new Board(locations, state)
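
    For contrast, here is a functional-style sketch in C# (treat it as executable pseudocode; the same shape translates directly to JavaScript's map/filter/reduce): the board is nothing but a set of live-cell coordinates, and each generation is a pure function of the previous one, with no Node objects and no mutation of shared state.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Life
        {
            // The eight neighbors of a cell.
            static IEnumerable<(int x, int y)> Neighbors((int x, int y) c) =>
                from dx in new[] { -1, 0, 1 }
                from dy in new[] { -1, 0, 1 }
                where dx != 0 || dy != 0
                select (c.x + dx, c.y + dy);

            // One generation as a pure function of the previous one. Counting how
            // often a coordinate appears among the neighbors of live cells gives
            // its live-neighbor count; then apply the birth/survival rule.
            static HashSet<(int, int)> Step(HashSet<(int, int)> live) =>
                new HashSet<(int, int)>(
                    live.SelectMany(Neighbors)
                        .GroupBy(c => c)
                        .Where(g => g.Count() == 3 ||
                                   (g.Count() == 2 && live.Contains(g.Key)))
                        .Select(g => g.Key));

            static void Main()
            {
                // A glider; run it for a few generations.
                var board = new HashSet<(int, int)>
                    { (1, 0), (2, 1), (0, 2), (1, 2), (2, 2) };
                for (int i = 0; i < 4; i++) board = Step(board);
                Console.WriteLine(string.Join(" ", board));
            }
        }

    The key move is representing the board as the set of live cells rather than a grid of node objects: the update rule then needs no per-node state, and "clone the state" disappears because nothing is mutated.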

    Read the article

  • Folder Size Column on Explorer on Windows Vista/Seven

    - by Click Ok
    I'm a big fan of FolderSize, but unfortunately it works only on Windows XP. Even after reading this and this, I'm not convinced that I can't have a column showing the folder size in Windows Explorer. Even with all its "problems", FolderSize worked like a charm on Windows XP. In a sysadmin's life, FolderSize is splendid. Before selecting a lot of folders to send to backup DVDs, I can check the size of the folders directly in Windows Explorer and put together a set of folders totaling 4.3 GB to burn to a DVD. In another situation, I can view the sizes of the biggest folders under the root of the hard drive and work out a good strategy for backup/partitioning/transfer to another drive/etc. I could list many other needs in my sysadmin life for a tool like FolderSize... Is anyone actively developing a solution to show folder sizes in Windows Explorer on Vista/Seven? What problems would I face if I developed that Explorer "add-in" myself?

    Read the article

  • How to stop receiving an error reading iCal invitations on Mac sent from Thunderbird?

    - by Matthew Rankin
    When I open an iCal invitation (a Mail Attachment.ics file) in Mac Mail.app, sent to me by someone using Thunderbird, I receive an error. The only solution that I've found to date is to save the Mail Attachment.ics file somewhere like the Desktop, open it in a text editor, and delete the blank lines at the beginning and end of the file. This problem appears to happen only when I'm receiving invitations from people using Thunderbird under Windows. Does anyone know a better way to solve this? Is this a bug in Apple iCal, or is Thunderbird not meeting the iCal spec? Configuration information: OS: Mac OS X 10.6.2; mail program: Mac Mail.app ver. 4.2 (1077); calendar program: Mac iCal.app ver. 4.0.1 (1374).
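
    The manual fix can at least be scripted. On Mac OS X, BSD sed can strip whitespace-only lines from the saved file in place (valid .ics files contain no blank lines, so this should be safe; the path is wherever the attachment was saved):

        # Delete blank (or whitespace/CR-only) lines, then open the cleaned file
        sed -i '' '/^[[:space:]]*$/d' ~/Desktop/"Mail Attachment.ics"
        open ~/Desktop/"Mail Attachment.ics"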

    Read the article

  • Does SpinRite do what it claims to do?

    - by romandas
    I don't have any real (i.e., professional) experience with Steve Gibson's SpinRite, so I'd like to put this to the SF community. Does SpinRite actually do what it claims? Is it a good product to use? With a proper backup solution and RAID fault tolerance, I've never found a need for it, but I'm curious. There seem to be conflicting messages regarding it, and no hard data to be found either way. On one hand, I've heard many home users claim it helped them, but I've heard home users say a lot of things - most of the time they don't have the knowledge or experience to accurately describe what really happened. On the other hand, Steve's own description and documentation don't give me a warm fuzzy about it either. So what is the truth of the matter? Would you use it?

    Read the article

  • How to unblock chm files - unblock button is missing

    - by Yoav
    I downloaded a CHM file. When I double-click it, it prompts me to open / save / cancel. Whether I open it or save a new copy, the 'new' version shows the same open / save / cancel popup, ad infinitum. Searching Google, it seems that Microsoft has deemed it right, for security reasons, to block these files by default. The solution is supposed to be to right-click the file and click the 'Unblock' button at the bottom of the Properties dialog. The problem is that I don't have that button on my system. By the way, the button is also missing for .exe files. I'm using Win7 64-bit. Any ideas?
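
    Background that suggests a workaround: the Unblock button only deletes an NTFS alternate data stream named Zone.Identifier that the browser attached on download, so anything that can remove that stream does the same job. Two hedged options (Unblock-File ships with PowerShell 3.0, which a stock Win7 machine may need to install first):

        REM Sysinternals streams.exe: delete alternate data streams from the file
        streams.exe -d "C:\path\to\file.chm"

        # PowerShell 3.0 or later
        Unblock-File -Path C:\path\to\file.chm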

    Read the article

  • Github Project Wiki: separate page filename from page title

    - by Marko Apfel
    Starting point: In a former version of the project wiki on GitHub there was a separation between the page filename and the page title. So we built up our wiki in such a manner that well-defined prefixes in the filename describe the overall context of the particular page. Sample: on the pages themselves we used a title tag at the top of the page to get the title into the rendered HTML page - here "= Tabelle: Anhänge bzw. Attachments" for the page with the filename "data+table+Attachment". So there was a file "data+table+Attachment" and a title tag "= Tabelle: Anhänge bzw. Attachments", and the page rendered with the title "Tabelle: Anhänge bzw. Attachments". This was fine. Problem: now the GitHub wiki uses the title of the page as the filename, and vice versa. This ends up in a cluttered file system and also suppresses the titles on the pages themselves. A page with the title tag "= Organisation: IT-Infrastruktur" is no longer rendered with that title; instead the filename "organisation+IT Infrastructure" is chosen as the title for the page. That sucks. Solution: I reported this to GitHub again and hope for a fix.

    Read the article

  • MSSQL Auditing Recomendations

    - by Josh Anderson
    As an aspiring DBA, I have recently been assigned the task of implementing tracking of all data changes in the database for a piece of software we are developing. After playing with Microsoft's change data capture methods, I'm looking into some other solutions. We are planning to distribute our product as a hosted solution, and unlimited installations would be desired for maximum scalability. I've looked at IBM's Guardium as well as DB Audit by SoftTree. I'm curious if anyone has solutions they may have used in the past, or possibly any suggestions or methods to achieve complete, and of course cost-effective, auditing of data changes.
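
    For comparison while weighing the third-party products: the built-in change data capture mentioned above is enabled with two system procedures, one per database and one per table to track (schema and table names below are placeholders; note that CDC requires Enterprise or Developer edition on the SQL Server versions of this era):

        -- Enable CDC on the current database, then on one table
        EXEC sys.sp_cdc_enable_db;
        EXEC sys.sp_cdc_enable_table
            @source_schema = N'dbo',
            @source_name   = N'Orders',
            @role_name     = NULL;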

    Read the article

  • Why unhandled exceptions are useful

    - by Simon Cooper
    It’s the bane of most programmers’ lives – an unhandled exception causes your application or webapp to crash, an ugly dialog gets displayed to the user, and they come complaining to you. Then, somehow, you need to figure out what went wrong. Hopefully, you’ve got a log file, or some other way of reporting unhandled exceptions (obligatory employer plug: SmartAssembly reports an application’s unhandled exceptions straight to you, along with the entire state of the stack and variables at that point). If not, you have to try and replicate it yourself, or do some psychic debugging to try and figure out what’s wrong. However, it’s good that the program crashed. Or, more precisely, it is correct behaviour. An unhandled exception in your application means that, somewhere in your code, there is an assumption that you made that is actually invalid. Coding assumptions: Let me explain a bit more. Every method, every line of code you write, depends on implicit assumptions that you have made. Take the following simple method, which copies a collection to an array and includes an item if it isn’t in the collection already, using a supplied IEqualityComparer:

        public static T[] ToArrayWithItem<T>(
            ICollection<T> coll, T obj, IEqualityComparer<T> comparer)
        {
            // check if the object is in collection already
            // using the supplied comparer
            foreach (var item in coll)
            {
                if (comparer.Equals(item, obj))
                {
                    // it's in the collection already
                    // simply copy the collection to an array
                    // and return it
                    T[] array = new T[coll.Count];
                    coll.CopyTo(array, 0);
                    return array;
                }
            }

            // not in the collection
            // copy coll to an array, and add obj to it
            // then return it
            T[] result = new T[coll.Count + 1];
            coll.CopyTo(result, 0);
            result[result.Length - 1] = obj;
            return result;
        }

    What are all the assumptions made by this fairly simple bit of code? coll is never null; comparer is never null; coll.CopyTo(array, 0) will copy all the items in the collection into the array, in the order defined for the collection, starting at the first item in the array; the enumerator for coll returns all the items in the collection, in the order defined for the collection; comparer.Equals returns true if the items are equal (for whatever definition of ‘equal’ the comparer uses), false otherwise; comparer.Equals, coll.CopyTo, and the coll enumerator will never throw an exception or hang for any possible input and any possible values of T; coll will have fewer than 4 billion items in it (this is a built-in limit of the CLR); the array won’t be more than 2GB, both on 32 and 64-bit systems, for any possible values of T (again, a limit of the CLR); and there are no threads that will modify coll while this method is running. And, more esoterically: the C# compiler will compile this code to IL according to the C# specification; the CLR and JIT compiler will produce machine code to execute the IL on the user’s computer; and the computer will execute the machine code correctly. That’s a lot of assumptions. Now, it could be that all these assumptions are valid for the situations in which this method is called. But if this does crash out with an exception, or crash later on, then that shows one of the assumptions has been invalidated somehow. An unhandled exception shows that your code is running in a situation which you did not anticipate, and there is something about how your code runs that you do not understand. Debugging the problem is the process of learning more about the new situation and how your code interacts with it. When you understand the problem, the solution is (usually) obvious.
The solution may be a one-line fix, the rewrite of a method or class, or a large-scale refactoring of the codebase, but whatever it is, the fix for the crash will incorporate the new information you’ve gained about your own code, along with the modified assumptions. When code is running with an assumption or invariant it depended on broken, then the result is ‘undefined behaviour’. Anything can happen, up to and including formatting the entire disk or making the user’s computer sentient and start doing a good impression of Skynet. You might think that those can’t happen, but at Halting problem levels of generality, as soon as an assumption the code depended on is broken, the program can do anything. That is why it’s important to fail-fast and stop the program as soon as an invariant is broken, to minimise the damage that is done. What does this mean in practice? To start with, document and check your assumptions. As with most things, there is a level of judgement required. How you check and document your assumptions depends on how the code is used (that’s some more assumptions you’ve made), how likely it is a method will be passed invalid arguments or called in an invalid state, how likely it is the assumptions will be broken, how expensive it is to check the assumptions, and how bad things are likely to get if the assumptions are broken. Now, some assumptions you can assume unless proven otherwise. You can safely assume the C# compiler, CLR, and computer all run the method correctly, unless you have evidence of a compiler, CLR or processor bug. You can also assume that interface implementations work the way you expect them to; implementing an interface is more than simply declaring methods with certain signatures in your type. The behaviour of those methods, and how they work, is part of the interface contract as well. For example, for members of a public API, it is very important to document your assumptions and check your state before running the bulk of the method, throwing ArgumentException, ArgumentNullException, InvalidOperationException, or another exception type as appropriate if the input or state is wrong. For internal and private methods, it is less important. If a private method expects collection items in a certain order, then you don’t necessarily need to explicitly check it in code, but you can add comments or documentation specifying what state you expect the collection to be in at a certain point. That way, anyone debugging your code can immediately see what’s wrong if this does ever become an issue. You can also use DEBUG preprocessor blocks and Debug.Assert to document and check your assumptions without incurring a performance hit in release builds. On my coding soapbox… A few pet peeves of mine around assumptions. Firstly, catch-all try blocks: try { ... } catch { } A catch-all hides exceptions generated by broken assumptions, and lets the program carry on in an unknown state. Later, an exception is likely to be generated due to further broken assumptions due to the unknown state, causing difficulties when debugging as the catch-all has hidden the original problem. It’s much better to let the program crash straight away, so you know where the problem is. You should only use a catch-all if you are sure that any exception generated in the try block is safe to ignore. That’s a pretty big ask! Secondly, using as when you should be casting. Doing this: (obj as IFoo).Method(); or this: IFoo foo = obj as IFoo; ... 
    foo.Method(); when you should be doing this: ((IFoo)obj).Method(); or this: IFoo foo = (IFoo)obj; ... foo.Method(); There’s an assumption here that obj will always implement IFoo. If it doesn’t, then by using as instead of a cast you’ve turned an obvious InvalidCastException at the point of the cast, which will probably tell you what type obj actually is, into a non-obvious NullReferenceException at some later point that gives you no information at all. If you believe obj is always an IFoo, then say so in code! Let it fail-fast if not; then it’s far easier to figure out what’s wrong. Thirdly, document your assumptions. If an algorithm depends on a non-trivial relationship between several objects or variables, then say so. A single-line comment will do. Don’t leave it up to whoever’s debugging your code after you to figure it out. Conclusion: It’s better to crash out and fail-fast when an assumption is broken. If the program doesn’t, then there are likely to be further crashes along the way that hide the original problem. Or, even worse, your program will be running in an undefined state, where anything can happen. Unhandled exceptions aren’t good per se, but they give you some very useful information about your code that you didn’t know before. And that can only be a good thing.

    Read the article

  • Transaction classification. Artificial intelligence

    - by Alex
    For a project, I have to classify a list of banking transactions based on their descriptions. Suppose I have 2 categories: health and entertainment. Initially, the transactions will have basic information: date and time, amount, and a description given by the user. For example: Transaction 1: 09/17/2012 12:23:02 pm - 45.32$ - "medicine payments"; Transaction 2: 09/18/2012 1:56:54 pm - 8.99$ - "movie ticket"; Transaction 3: 09/18/2012 7:46:37 pm - 299.45$ - "dentist appointment"; Transaction 4: 09/19/2012 6:50:17 am - 45.32$ - "videogame shopping". The idea is to use that description to classify the transaction: 1 and 3 would go to the "health" category, while 2 and 4 would go to "entertainment". I want to use the Google Prediction API to do this. In reality, I have 7 different categories and, for each one, a lot of keywords related to that category. I would use some for training and some for testing. Is this even possible? I mean, to determine the category given a few words? Plus, the number of words is not necessarily the same in every transaction. Thanks for any help or guidance! Very appreciated. Possible solution: https://developers.google.com/prediction/docs/hello_world?hl=es#theproblem
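
    Yes, it's possible: variable-length descriptions are exactly what bag-of-words classifiers handle. A hedged C# sketch of the idea behind what the Prediction API does for this problem (a multinomial naive Bayes over words; the training data is a toy set, and real use needs far more examples, with Laplace smoothing included to cope with unseen words):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class NaiveBayes
        {
            // per-category word counts, per-category document counts, global vocabulary
            readonly Dictionary<string, Dictionary<string, int>> counts =
                new Dictionary<string, Dictionary<string, int>>();
            readonly Dictionary<string, int> docs = new Dictionary<string, int>();
            readonly HashSet<string> vocab = new HashSet<string>();

            static IEnumerable<string> Words(string text) =>
                text.ToLowerInvariant()
                    .Split(new[] { ' ', ',', '.' }, StringSplitOptions.RemoveEmptyEntries);

            public void Train(string category, string description)
            {
                if (!counts.ContainsKey(category))
                {
                    counts[category] = new Dictionary<string, int>();
                    docs[category] = 0;
                }
                docs[category]++;
                foreach (var w in Words(description))
                {
                    vocab.Add(w);
                    counts[category][w] = counts[category].TryGetValue(w, out var n) ? n + 1 : 1;
                }
            }

            public string Classify(string description)
            {
                var words = Words(description).ToList();
                return counts.Keys.OrderByDescending(cat =>
                {
                    double total = counts[cat].Values.Sum();
                    // log prior + sum of log word likelihoods (Laplace smoothed)
                    double score = Math.Log((double)docs[cat] / docs.Values.Sum());
                    foreach (var w in words)
                        score += Math.Log((counts[cat].TryGetValue(w, out var n) ? n + 1.0 : 1.0)
                                          / (total + vocab.Count));
                    return score;
                }).First();
            }

            static void Main()
            {
                var nb = new NaiveBayes();
                nb.Train("health", "medicine payments");          // toy training set:
                nb.Train("health", "dentist appointment");        // in practice, feed in
                nb.Train("entertainment", "movie ticket");        // the per-category
                nb.Train("entertainment", "videogame shopping");  // keyword lists
                Console.WriteLine(nb.Classify("pharmacy medicine")); // likely "health"
            }
        }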

    Read the article

  • Spirent Communications Improves Customer Experience with Knowledge Management

    - by Tony Berk
    Spirent Communications plc is a global leader in test and measurement, inspiring innovation within development labs, communication networks, and IT organizations. The world’s leading communications companies rely on Spirent to help design, develop, validate, and deliver world-class networks, devices, and services. Spirent’s customers require high levels of support for a diverse and complex product portfolio, and the company is committed to delivering on this requirement. Spirent needed a solution to help its customers get the information they need quickly and at their convenience through its Web site. After evaluating several solutions, Spirent selected and deployed Oracle Knowledge for Web Self Service Enterprise Edition. Oracle Knowledge Management uses natural language processing to understand the true intent of each inquiry logged via the support portal’s search function. The Spirent Knowledge Base on the company’s Customer Support Center (CSC) finds the best possible answer using search enhancement features, such as communications-industry-specific libraries and federation to search external sources. Spirent has reduced contact center call volume while better serving its customers. Each time a customer uses the knowledge base, they find answers faster than by calling, and it saves Spirent an average of US$210 per call, which is significant when multiplied across the thousands of calls received monthly. Oracle Knowledge also helps support engineers find answers more quickly, enabling the company to scale without adding additional support engineers. Oracle Knowledge is integrated with Spirent's Siebel Contact Center implementation to provide an integrated desktop for CRM and agent intelligence, avoiding the need for contact center personnel to toggle between various screens to address customer inquiries, thereby accelerating customer service. Click here to learn more about Spirent's use of Siebel CRM and Oracle Knowledge Management.

    Read the article
