Search Results

Search found 13457 results on 539 pages for 'superuser interface'.

Page 210 of 539

  • What alternatives are there for asp.net forms authentication?

    - by Eytan Levit
    Hi, we are developing a web app that will have a pretty complex user and permission system. The general idea is that we have three levels of security: a simple user, who can only access basic data in a data repository; a manager, who can open up data repositories; and a superuser, who can open up repository factories. Each repository contains various data types (text, images, etc.). We are looking for authentication methods that will give us: 1. Scalability. 2. Customization. 3. Permissions that affect the GUI and deny access to certain pages. 4. Predefined roles, allowing easy setup of new users. 5. Custom roles for specific users, allowing them permission sets that differ from the predefined roles. Thanks in advance.
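    As a sketch of the kind of checks requirement 3 implies (not from the original question; the control and role names here are hypothetical), ASP.NET's built-in role provider lets you drive GUI visibility from roles:

        // A minimal sketch, assuming forms authentication with the standard
        // ASP.NET RoleProvider configured in web.config; role and control
        // names are hypothetical.
        using System;
        using System.Web.UI;

        public partial class RepositoryPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Requirement 3: permissions that affect the GUI
                openRepositoryButton.Visible =
                    User.IsInRole("Manager") || User.IsInRole("Superuser");
                createFactoryPanel.Visible = User.IsInRole("Superuser");
            }
        }

    Page-level denial (the other half of requirement 3) can then be handled declaratively with <authorization> role rules in web.config, so the same role definitions serve both purposes.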

    Read the article

  • What is branched in a repository?

    - by Peter M
    Ok, I hope that this will end up sounding like a reasonable question. From what I understand of Subversion, if you have a repo that contains multiple projects, then you can branch individual projects within that repo (see SVN Red Book - Using Branches). However, what I don't quite follow is what happens when you create a branch in one of the distributed systems (Git, Hg, Bazaar - I don't think it matters which one). Can you branch just a sub-directory of the repo, or when you create the branch are you branching the entire repo? This question is part of a larger one that I posted on superuser (choice and setup of version control) and has come about as I am trying to figure out how best to version control a large hierarchical layout of independent projects. It may be that for distributed systems what I would like to do is best handled by a sub-project mechanism of some sort - but again, that is something I am not clear on, although I have heard the term mentioned in regard to Git.

    Read the article

  • Django remove list_editable on a per row basis.

    - by Jason Leveille
    On the back of Django 1.2 RC1, I have built a great admin for my client! It's awesome. The client has one request that, if not satisfied, could bring this house of cards crashing down. I have a list of swim meet results. A meet result is added by a superuser. A team rep can edit a meet result (which they must be able to do inline, with list_editable), but after they edit the meet result inline and save, they should no longer be able to edit that result inline. They can only perform one edit. So, my question ... can I turn off list_editable on a per-row basis?

    Read the article

  • Is it easier to write filesystem drivers in userspace than in kernel space?

    - by Jack
    I will use the Linux NTFS driver as an example. The Linux kernel NTFS driver has only very limited write support in the kernel, and after 5 years it is still considered experimental. The same development team creates the ntfsmount userspace driver, which has almost perfect write support. Likewise, the NTFS-3G project, which is written by a different team, also has almost perfect write support. Why has the kernel driver taken so much longer? Is it much harder to develop for? Saying that there already exists a decent userspace application is not a reason why the kernel driver is not complete. NOTE: Do not migrate this to superuser.com. I want a programming-heavy answer, from a programming perspective, not a practical-use answer. If the question is not appropriate for SO, please advise me as to why, so I can edit it accordingly.

    Read the article

  • using sudo with mercurial and ssh authentication

    - by Shawn
    How do I run ssh-add key and then sudo hg clone [email protected]/etc/etc but use my SSH keys and not the superuser's? Hey everyone, when I use sudo with, for example, sudo hg clone [email protected]/etc/etc after I have added a key to my user account, it doesn't work. I remember this is because sudo runs as the superuser, but that user cannot have keys added to it. I remember setting some directive (I'm using Debian) that allowed me to run that command with sudo but still have my SSH keys taken from my normal user account, but I didn't make a note of it at the time. Thanks.

    Read the article

  • Creating an Application to Save Arbitrary Application State

    - by ashes999
    See this SuperUser question. To summarize, VM software lets you save the state of arbitrary applications (by saving the whole VM image). Would it be possible to write some software for Windows that allows you to save and reload arbitrary application state? If so (and presumably so), what would it entail? I would be looking to implement this, if possible, in a high-level language like C#. I presume if I used something else, I would need to dump memory registers (or maybe dump the entire application memory block) to a file somewhere and load it back later to restore state. So how do I build this thing?
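    For the "save" half, Windows does expose a way to capture a process's entire memory: a minidump. A hedged C# sketch follows; the hard, unsolved part of the question is the reverse direction, since Windows offers no supported way to resume a process from a dump (open handles, sockets and GPU state are not restorable this way):

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Runtime.InteropServices;

        class StateSaver
        {
            // P/Invoke into dbghelp.dll; MiniDumpWithFullMemory (0x2) captures
            // the whole address space of the target process.
            [DllImport("dbghelp.dll", SetLastError = true)]
            static extern bool MiniDumpWriteDump(IntPtr hProcess, uint processId,
                SafeHandle hFile, int dumpType, IntPtr exceptionParam,
                IntPtr userStreamParam, IntPtr callbackParam);

            const int MiniDumpWithFullMemory = 0x2;

            // Saves a full-memory snapshot of the given process to a file.
            // Dumping another process may require elevated privileges.
            static void Save(Process target, string path)
            {
                using (var fs = new FileStream(path, FileMode.Create))
                {
                    if (!MiniDumpWriteDump(target.Handle, (uint)target.Id,
                        fs.SafeFileHandle, MiniDumpWithFullMemory,
                        IntPtr.Zero, IntPtr.Zero, IntPtr.Zero))
                        throw new InvalidOperationException("dump failed");
                }
            }
        }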

    Read the article

  • How to access a Java collection, just like a table in the database, having indexes and LINQ-like queries

    - by Shaman
    This task occurs from time to time in my projects. I need to handle a collection of some complex elements having different attributes, such as login, password_hash, role, etc., and I need to be able to query that collection just like I query a table in the database, having only partial data. For example: get all users with role "user". Or check if there's a user with login "root" and role "superuser". Removing items based on the same kind of criteria is also needed. My first attempt was to use Google Collections, Apache Collections, and lambdaj. All of them have a very similar predicate mechanism, but with a great disadvantage: it is based on iterating one by one over the collection of items, which is not acceptable for frequently used collections containing large amounts of data. Could you please suggest a solution? Thanks.
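    The usual fix is to maintain a secondary index per frequently queried attribute, so lookups become hash lookups instead of full scans. A sketch of the idea (in C# for illustration; the same shape maps directly onto Java's HashMap; the User type here is hypothetical):

        using System.Collections.Generic;
        using System.Linq;

        class User { public string Login; public string Role; }

        class UserTable
        {
            // One index per attribute we query on; kept in sync on Add/Remove.
            private readonly Dictionary<string, User> byLogin = new Dictionary<string, User>();
            private readonly Dictionary<string, List<User>> byRole = new Dictionary<string, List<User>>();

            public void Add(User u)
            {
                byLogin[u.Login] = u;
                if (!byRole.ContainsKey(u.Role)) byRole[u.Role] = new List<User>();
                byRole[u.Role].Add(u);
            }

            // "Get all users with role X" without scanning the whole collection.
            public IEnumerable<User> WithRole(string role) =>
                byRole.TryGetValue(role, out var list) ? list : Enumerable.Empty<User>();

            // "Is there a user with login 'root' and role 'superuser'?"
            public bool Exists(string login, string role) =>
                byLogin.TryGetValue(login, out var u) && u.Role == role;
        }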

    Read the article

  • Will Windows Update modify anything in Visual Studio?

    - by Martin
    (Note: Yes, the technical side of this question seems to be rather SuperUser material, but the implications are more relevant for StackOverflow readers.) As the title says, we are wondering if (fully) enabling automated Windows Updates on our developer machines will have implications for MS Visual Studio. That is, will any fixes to any components (be it libraries, UI/IDE, compiler, ...) ever be delivered through Windows Update? We want to have 100% exact and reproducible development environments (wrt C++) on all developer machines, and so we are concerned that automated Windows Updates may introduce some uncontrolled changes into our development chain.

    Read the article

  • How to use all the cores in Windows 7?

    - by Anon
    I am not sure if this belongs on Stack Overflow or Superuser, but I thought I would ask here. I have a console-based application written in C which currently takes about an hour to complete on Windows 7 64-bit. The task manager reports that the application is using only 25% of the available CPU. I would like to reduce the run time by increasing CPU usage. Is there any way to let the application use all four cores (the laptop has a Core i5) instead of just one? I am assuming that task manager reports 25% because only one core is allocated to the program.
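    The premise is right: a single-threaded process can only ever occupy one core, which shows up as 25% on a machine with four logical cores. Using more cores means explicitly splitting the work across threads. A sketch of the idea (in C# rather than the poster's C, purely as an illustration; the data array and Expensive function are hypothetical stand-ins):

        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                int[] data = new int[1_000_000];
                long[] results = new long[data.Length];

                // Parallel.For partitions the iteration space across worker
                // threads, which the OS schedules onto all available cores.
                // A plain for loop would stay on one core. Iterations must be
                // independent of each other for this to be safe.
                Parallel.For(0, data.Length, i =>
                {
                    results[i] = Expensive(data[i]);
                });
            }

            static long Expensive(int x) => (long)x * x;  // stand-in workload
        }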

    Read the article

  • Can I install Whizzy for Emacs on a Mac (is Mac OS X a unix environment)?

    - by Vivi
    I think my question is pretty stupid, but here it goes: I am using Aquamacs, and I want to install the Whizzy mode. The website for Whizzy says that "it is designed for Unix platforms". I read that Mac OS X is unix certified, but does that mean I can install Whizzy on my mac? If yes, can I install and use it with Aquamacs or do I have to use the Emacs running from the terminal? PS: I don't know whether this question should be posted here or on SuperUser, but as Emacs users seem to hang out here more often, this is the place I chose.

    Read the article

  • Running ARM assembly code in Android

    - by Robert Joseph Dacunto
    I've been following the guide posted here, trying to get this Hello, World program to run on my Samsung Galaxy S3. It's rooted already, and I successfully pushed the "hello" file onto the sdcard. Now when I enter the shell as the superuser (# instead of $), and try to run the file, I get "cannot execute - permission denied". I used chmod 755 hello to see if that would fix it, still nothing. Is there something I'm missing? This is my first time fiddling around with Android, just got the phone, and wanted to see if I could get this to work. Very new to it all. Thanks!

    Read the article

  • Enclosing multiple LaTeX documents

    - by InfiniteInt
    I am not sure if this question is suitable for superuser instead, but I'll ask anyway. Currently I'm writing the documentation for my final project. I must append several other LaTeX documents as enclosures. They must be mentioned in the first Table of Contents as alphabetical appendices. I tried the package subfiles, but the subsequent TOCs are always empty except when compiling them alone. Is there any other way to append multiple full documents with their own documentclass?

    Read the article

  • Rewrite Registry File in Windows

    - by Vulcan Eager
    I have been trying to find a way to "defragment" the registry on my Windows machine. Firstly, does this make sense? Any benefits in doing this? (Not much love on superuser.com) Secondly, I am looking for a way to rewrite the registry using C/C++ with Windows API. Is there a way to read the registry and write it to a new file getting rid of unused bytes along the way? (I might have to write the new file and then boot into another OS/disk before I can overwrite the original... but I am willing to take that risk.)
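    On the second question, the read half is straightforward; it is writing a compacted hive back out that the public API does not really support, which is why RegSaveKey/RegReplaceKey (or the offline overwrite pass suggested above) is the usual route. A hedged sketch of the read half only, in C# rather than C/C++ purely for brevity:

        using System;
        using Microsoft.Win32;

        class RegistryWalker
        {
            // Recursively visits every value under a key. Writing these back
            // into a fresh, compact hive file is the part the supported API
            // does not expose directly.
            static void Walk(RegistryKey key)
            {
                foreach (string valueName in key.GetValueNames())
                    Console.WriteLine("{0}\\{1} = {2}",
                        key.Name, valueName, key.GetValue(valueName));

                foreach (string subKeyName in key.GetSubKeyNames())
                {
                    // Some keys are unreadable without extra privileges.
                    try
                    {
                        using (var sub = key.OpenSubKey(subKeyName))
                            if (sub != null) Walk(sub);
                    }
                    catch (System.Security.SecurityException) { }
                }
            }

            static void Main() => Walk(Registry.CurrentUser);
        }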

    Read the article

  • Switch role after connecting to database

    - by Chris Gow
    Is it possible to change the PostgreSQL role a user is using when interacting with the database after the initial connection? The database(s) will be used in a web application, and I'd like to employ database-level rules on tables and schemas together with connection pooling. From reading the PostgreSQL documentation, it appears I can switch roles if I originally connect as a user with the superuser role, but I would prefer to initially connect as a user with minimal permissions and switch as necessary. Having to specify the user's password when switching would be fine (in fact I'd prefer it). What am I missing?
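    A hedged sketch of what switching looks like from client code. SET ROLE requires that the connecting user be a member of the target role, not necessarily a superuser; the connection string and role names here are hypothetical:

        using Npgsql;

        class RoleSwitchDemo
        {
            static void Main()
            {
                using (var conn = new NpgsqlConnection(
                    "Host=localhost;Username=app_login;Password=secret;Database=appdb"))
                {
                    conn.Open();

                    // Switch the session to a differently privileged role.
                    using (var cmd = new NpgsqlCommand("SET ROLE web_user", conn))
                        cmd.ExecuteNonQuery();

                    // Statements from here on are checked against web_user's
                    // privileges; issue RESET ROLE before handing the
                    // connection back to the pool.
                }
            }
        }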

    Read the article

  • How to restart Linux from inside a C++ program?

    - by Dave K
    I have a Qt 4 GUI where I need to have an option in a drop-down menu that allows the user to choose to restart the computer. I realize this might seem redundant with the ability to restart the computer in other ways, but the choice needs to stay there. I've tried using system() to call the following: a suid-root shell script, a non-suid shell script, and a suid-root binary program - and all of them just cause "reboot: must be superuser" to be printed. Using system() to call reboot directly does the same thing. I'm not especially attached to using system() to do this, but it seemed like the most direct choice. How can I reboot the system from the GUI?

    Read the article

  • setfsuid() and python 2.5.4

    - by user331398
    Hi, I'm trying to use setfsuid() with Python 2.5.4 and RHEL 5.4. Since it's not included in the os module, I wrapped it in a C module of my own and installed it as a Python extension module using distutils. However, when I try to use it I don't get the expected result. setfsuid() returns a value indicating success (changing from a superuser), but I can't access files to which only the newly set user should have access (using open()), indicating that the fsuid was not truly changed. I tried to verify that setfsuid() worked by running it twice consecutively with the same user input. The result was as if nothing had changed: on every call the returned value was the old user id, different from the new one. I also called getpid() from the module and from the Python script; both returned the same id, so that is not the problem. Just in case it's significant, I should note that I'm doing all of this from within an Apache daemon process (WSGI). Can anyone provide an explanation? Thank you

    Read the article

  • 'pip install carbon' looks like it works, but pip disagrees afterward

    - by fennec
    I'm trying to use pip to install the package carbon, a package related to statistics collection. When I run pip install carbon, it looks like everything works. However, pip is unconvinced that the package is actually installed. (This ultimately causes trouble because I'm using Puppet, and have a rule to install carbon using pip, and when puppet asks pip "is this package installed?" it says "no" and it reinstalls it again.) How do I figure out what's preventing pip from recognizing the success of this installation? Here is the output of the regular install: root@statsd:/opt/graphite# pip install carbon Downloading/unpacking carbon Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 Successfully installed carbon Cleaning up... 
root@statsd:/opt/graphite# pip freeze | grep carbon root@statsd: Here is the verbose version of the install: root@statsd:/opt/graphite# pip install carbon -v Downloading/unpacking carbon Using version 0.9.9 (newest of versions: 0.9.9, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5) Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon running egg_info creating pip-egg-info/carbon.egg-info writing requirements to pip-egg-info/carbon.egg-info/requires.txt writing pip-egg-info/carbon.egg-info/PKG-INFO writing top-level names to pip-egg-info/carbon.egg-info/top_level.txt writing dependency_links to pip-egg-info/carbon.egg-info/dependency_links.txt writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) reading manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon running install running build running build_py creating build creating build/lib.linux-i686-2.7 creating build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_publisher.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/manhole.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/instrumentation.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/cache.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/management.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/relayrules.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/events.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/protocols.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/conf.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/rewrite.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/hashing.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/writer.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/client.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/util.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/service.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_listener.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/routers.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/storage.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/log.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/__init__.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/state.py -> build/lib.linux-i686-2.7/carbon creating build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/receiver.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/rules.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/buffers.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/__init__.py -> build/lib.linux-i686-2.7/carbon/aggregator package init file 
'lib/twisted/plugins/__init__.py' not found (or not a regular file) creating build/lib.linux-i686-2.7/twisted creating build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_relay_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_aggregator_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_cache_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/carbon/amqp0-8.xml -> build/lib.linux-i686-2.7/carbon running build_scripts creating build/scripts-2.7 copying and adjusting bin/validate-storage-schemas.py -> build/scripts-2.7 copying and adjusting bin/carbon-aggregator.py -> build/scripts-2.7 copying and adjusting bin/carbon-cache.py -> build/scripts-2.7 copying and adjusting bin/carbon-relay.py -> build/scripts-2.7 copying and adjusting bin/carbon-client.py -> build/scripts-2.7 changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 running install_lib copying build/lib.linux-i686-2.7/carbon/amqp_publisher.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/manhole.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp0-8.xml -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/instrumentation.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/cache.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/management.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/relayrules.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/events.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/protocols.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/conf.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/rewrite.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/hashing.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/writer.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/client.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/util.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/aggregator/receiver.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/rules.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/buffers.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/__init__.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/service.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp_listener.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/routers.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/storage.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/log.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/__init__.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/state.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/twisted/plugins/carbon_relay_plugin.py -> /opt/graphite/lib/twisted/plugins copying 
build/lib.linux-i686-2.7/twisted/plugins/carbon_aggregator_plugin.py -> /opt/graphite/lib/twisted/plugins copying build/lib.linux-i686-2.7/twisted/plugins/carbon_cache_plugin.py -> /opt/graphite/lib/twisted/plugins byte-compiling /opt/graphite/lib/carbon/amqp_publisher.py to amqp_publisher.pyc byte-compiling /opt/graphite/lib/carbon/manhole.py to manhole.pyc byte-compiling /opt/graphite/lib/carbon/instrumentation.py to instrumentation.pyc byte-compiling /opt/graphite/lib/carbon/cache.py to cache.pyc byte-compiling /opt/graphite/lib/carbon/management.py to management.pyc byte-compiling /opt/graphite/lib/carbon/relayrules.py to relayrules.pyc byte-compiling /opt/graphite/lib/carbon/events.py to events.pyc byte-compiling /opt/graphite/lib/carbon/protocols.py to protocols.pyc byte-compiling /opt/graphite/lib/carbon/conf.py to conf.pyc byte-compiling /opt/graphite/lib/carbon/rewrite.py to rewrite.pyc byte-compiling /opt/graphite/lib/carbon/hashing.py to hashing.pyc byte-compiling /opt/graphite/lib/carbon/writer.py to writer.pyc byte-compiling /opt/graphite/lib/carbon/client.py to client.pyc byte-compiling /opt/graphite/lib/carbon/util.py to util.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/receiver.py to receiver.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/rules.py to rules.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/buffers.py to buffers.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/service.py to service.pyc byte-compiling /opt/graphite/lib/carbon/amqp_listener.py to amqp_listener.pyc byte-compiling /opt/graphite/lib/carbon/routers.py to routers.pyc byte-compiling /opt/graphite/lib/carbon/storage.py to storage.pyc byte-compiling /opt/graphite/lib/carbon/log.py to log.pyc byte-compiling /opt/graphite/lib/carbon/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/state.py to state.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_relay_plugin.py to carbon_relay_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_aggregator_plugin.py to carbon_aggregator_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py to carbon_cache_plugin.pyc running install_data copying conf/storage-schemas.conf.example -> /opt/graphite/conf copying conf/rewrite-rules.conf.example -> /opt/graphite/conf copying conf/relay-rules.conf.example -> /opt/graphite/conf copying conf/carbon.amqp.conf.example -> /opt/graphite/conf copying conf/aggregation-rules.conf.example -> /opt/graphite/conf copying conf/carbon.conf.example -> /opt/graphite/conf running install_egg_info running egg_info creating lib/carbon.egg-info writing requirements to lib/carbon.egg-info/requires.txt writing lib/carbon.egg-info/PKG-INFO writing top-level names to lib/carbon.egg-info/top_level.txt writing dependency_links to lib/carbon.egg-info/dependency_links.txt writing manifest file 'lib/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found reading manifest file 'lib/carbon.egg-info/SOURCES.txt' writing manifest file 'lib/carbon.egg-info/SOURCES.txt' removing '/opt/graphite/lib/carbon-0.9.9-py2.7.egg-info' (and everything under it) Copying lib/carbon.egg-info to /opt/graphite/lib/carbon-0.9.9-py2.7.egg-info running install_scripts copying build/scripts-2.7/validate-storage-schemas.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-aggregator.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-cache.py -> /opt/graphite/bin copying 
build/scripts-2.7/carbon-relay.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-client.py -> /opt/graphite/bin changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 writing list of installed files to '/tmp/pip-9LuJTF-record/install-record.txt' Successfully installed carbon Cleaning up... Removing temporary dir /opt/graphite/build... root@statsd:/opt/graphite# For reference, this is pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)

    Read the article

  • Coexistence of projects between Visual Studio 2010 and 2012

    - by sreejukg
    Microsoft has released another version of Visual Studio, named Visual Studio 2012. As you can see, there are user interface (UI) changes in most Microsoft applications, as Microsoft moves towards Windows 8 and a new UI scheme for all of its applications. Visual Studio 2012 adopts the new interface requirements that are coherent with Windows 8. Beyond this, Visual Studio 2012 has lots of improvements in several areas, and it supports .NET Framework 4.5. In the past, whenever a new version of Visual Studio launched, developers needed to upgrade their projects to the new version, which was a pain, especially when working with a team of developers. Once a solution was upgraded to a newer version, there was no going back. With Visual Studio 2012, you can avoid the pain of upgrading. Developers are able to open their projects in Visual Studio 2012 as well as in Visual Studio 2010 SP1. This means that if you create a project using Visual Studio 2012, you will be able to open it with Visual Studio 2010 SP1 and vice versa. There are some exceptions (as always!). Visual Studio 2012 supports some new project types that were not there in the 2010 version, and such projects cannot be opened in Visual Studio 2010. For example, Visual Studio 2012 brings a new project type named "Windows 8 Modern Applications"; such projects cannot be opened using the 2010 version of Visual Studio. Just to prove the point, I am going to perform some simple operations. I installed Visual Studio 2010 with SP1 and Visual Studio 2012 on my PC. See the snapshots for both installations. Visual Studio 2010 Visual Studio 2012 Now I am going to perform two test cases: first, create a project in the 2010 version and open it in the 2012 version, and then vice versa. If you are interested, you can continue scrolling down; otherwise just say bye bye to this article. Case 1: Open a solution created using Visual Studio 2010 in the 2012 version. I created a project in VS 2010 named TestProject2010 using the empty ASP.NET web application template. Once created, the project appears in VS 2010 as follows. I closed Visual Studio and opened the solution file using VS 2012 via the Open Project dialog (File -> Open Project/Solution). Surprisingly, there is not even a warning message; the project just opened fine in Visual Studio 2012. Case 2: Open a solution created using Visual Studio 2012 in the 2010 version. I created a project in Visual Studio 2012 named testProject2012. See the screenshot of the project in VS 2012 below. Now try opening the solution in Visual Studio 2010. The solution loaded successfully, but Visual Studio failed to load the project. See the screenshot. At first I was surprised: the web application project template is available in both versions, so there should not be any problem. What was causing the incompatibility? Was it the ASP.NET version? Yes it was. VS 2012 assigns .NET Framework 4.5 as the default target, and that was causing the trouble for Visual Studio 2010. I changed the version to .NET Framework 4.0 and saved the project; after that I was able to open the project in Visual Studio 2010. This is an excellent move from the Visual Studio team and allows enterprises to perform a gradual upgrade to the new version. Now developers can work in either version based on availability and preference; I can use Visual Studio 2012 as my IDE while my colleague working on the same project can still use Visual Studio 2010.

    Read the article

  • ASP.NET JavaScript Routing for ASP.NET MVC–Constraints

    - by zowens
    If you haven't had a look at my previous post about ASP.NET routing, go ahead and check it out before you read this post: http://weblogs.asp.net/zowens/archive/2010/12/20/asp-net-mvc-javascript-routing.aspx And the code is here: https://github.com/zowens/ASP.NET-MVC-JavaScript-Routing Anyways, this post is about routing constraints. A routing constraint is essentially a way for the routing engine to filter out route patterns based on the data from the URL. For example, if I have a route where all the parameters are required, I could use a constraint on the required parameters to say that the parameter is non-empty. Here's what the constraint would look like (a reconstruction appears after this excerpt): notice that this is a class that inherits from IRouteConstraint, which is an interface provided by System.Web.Routing. The Match method returns true if the value is a match (and can be further processed by the routing rules) or false if it does not match (and the route will be matched further along the route collection). Because routing constraints are so essential to the route matching process, it was important that they be part of my JavaScript routing engine. But the problem is that we need to somehow represent the constraint in JavaScript. I made a design decision early on that you MUST put the constraint into JavaScript to match a route. I didn't want to have server interaction for the URL generation, like I've seen in so many applications; while that approach is easy to implement, it causes maintenance issues in my opinion. So the way constraints work in JavaScript is that the constraint, as an object type definition, is set on the route manager. When a route is created, a new instance of the constraint is created with the specific parameter. In its current form the constraint function MUST return a function that takes the route data and will return true or false. You will see the NotEmpty constraint in a bit. Another piece of the puzzle is that you can have the JavaScript exist as a string in your application that is pulled in when the routing JavaScript code is generated. There is a simple interface, IJavaScriptAddition, that I have added that will be used to output custom JavaScript. Let's put it all together. Here is the NotEmpty constraint (see the sketch below). There are a few things at work here. The constraint is called "notEmpty" in JavaScript. When you add the constraint to a parameter in your C# code, the route manager generator will look for the JsConstraint attribute to find the constraint's type name, falling back to the class name. For example, if I didn't apply the "JsConstraint" attribute, the constraint would be called "NotEmpty". The JavaScript code essentially adds a function to the "constraintTypeDefs" object on the "notEmpty" property (this is how constraints are added to routes). The function returns another function that will be invoked with routing data. Here's how you would use the NotEmpty constraint in C#, and it will work with the JavaScript routing generator. The only catch to using route constraints currently is that the following is not supported: the constraint will work in C# but is not supported by my JavaScript routing engine. (I take pull requests, so if you'd like this... go ahead and implement it.) I just wanted to take this post to explain a little bit about the background on constraints. I am looking at expanding the current functionality, but for now this is a good start. Thanks for all the support with the JavaScript router. Keep the feedback coming!
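    The original post showed the constraint and its usage as code images, which did not survive in this excerpt. A hedged reconstruction of what the NotEmpty constraint plausibly looks like: the IRouteConstraint.Match signature is the real System.Web.Routing one, but the JsConstraint attribute belongs to the author's library, so its exact shape is assumed.

        using System.Web;
        using System.Web.Routing;

        // Name used on the JavaScript side; falls back to the class name
        // if the attribute is omitted. (Attribute shape assumed.)
        [JsConstraint("notEmpty")]
        public class NotEmpty : IRouteConstraint
        {
            public bool Match(HttpContextBase httpContext, Route route,
                              string parameterName, RouteValueDictionary values,
                              RouteDirection routeDirection)
            {
                object value;
                if (values.TryGetValue(parameterName, out value))
                    return value != null && !string.IsNullOrEmpty(value.ToString());
                return false;
            }
        }

        // Plausible usage when mapping a route (route manager API assumed):
        // routes.MapRoute("Post", "posts/{id}",
        //     new { controller = "Posts", action = "View" },
        //     new { id = new NotEmpty() });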

    Read the article

  • Installer Reboots at "Detecting hardware" (disks and other hardware) on all recent Server Installs

    - by Ryan Rosario
    I have a very frustrating problem with my PC. I cannot install any recent version of Ubuntu Server (or even Desktop) since 9.04 even using the text-based installer. I boot from a USB stick created by Unetbootin (I also tried other methods such as startup disk creator with no difference). On the Server installer, it gets to "Detecting Hardware" (the second one about disks and all other hardware, not network hardware) and then either hangs at 0% (waited 24 hours), or reboots after a minute or two. My system (late 2007): ASUS P5NSLI motherboard Intel Core 2 Duo E6600 2.4Ghz 2 x 1GB Corsair 667MHz RAM nVidia GeForce 6600 I have unplugged everything (including the only hard disk, CD-ROMs and floppy). I have only one stick of RAM (tried each one to no avail) and am booting the installer from a USB stick (booting from CD-ROM yields the same problem). I also tried several of the boot options (nomodeset, nousb, acpi=off, noapic, i915.modeset=1/0, xforcevesa) in all combinations) to no avail. The only active parts of my system are the video card, mouse, keyboard and USB stick. I have also updated the BIOS to the most recent version. (FWIW, on the Desktop installer, I get a black screen after hitting the Install option.) Even after removing "quiet" I am unable to see what kernel panic is occurring (or not occurring) to cause the install to crash. I am only able to save the debug logs via a simple webserver in the installer. After the last line (I repeatedly refreshed), the server stops responding and the installer hangs or reboots: Jan 2 01:04:03 main-menu[302]: INFO: Menu item 'disk-detect' selected Jan 2 01:04:04 kernel: [ 309.154372] sata_nv 0000:00:0e.0: version 3.5 Jan 2 01:04:04 kernel: [ 309.154409] sata_nv 0000:00:0e.0: Using SWNCQ mode Jan 2 01:04:04 kernel: [ 309.154531] sata_nv 0000:00:0e.0: setting latency timer to 64 Jan 2 01:04:04 kernel: [ 309.164442] scsi0 : sata_nv Jan 2 01:04:04 kernel: [ 309.167610] scsi1 : sata_nv Jan 2 01:04:04 kernel: [ 309.167762] ata1: SATA max UDMA/133 cmd 0x9f0 ctl 0xbf0 bmdma 0xd400 irq 10 Jan 2 01:04:04 kernel: [ 309.167774] ata2: SATA max UDMA/133 cmd 0x970 ctl 0xb70 bmdma 0xd408 irq 10 Jan 2 01:04:04 kernel: [ 309.167948] sata_nv 0000:00:0f.0: Using SWNCQ mode Jan 2 01:04:04 kernel: [ 309.168071] sata_nv 0000:00:0f.0: setting latency timer to 64 Jan 2 01:04:04 kernel: [ 309.171931] scsi2 : sata_nv Jan 2 01:04:04 kernel: [ 309.173793] scsi3 : sata_nv Jan 2 01:04:04 kernel: [ 309.173943] ata3: SATA max UDMA/133 cmd 0x9e0 ctl 0xbe0 bmdma 0xe800 irq 11 Jan 2 01:04:04 kernel: [ 309.173954] ata4: SATA max UDMA/133 cmd 0x960 ctl 0xb60 bmdma 0xe808 irq 11 Jan 2 01:04:04 kernel: [ 309.174061] pata_amd 0000:00:0d.0: version 0.4.1 Jan 2 01:04:04 kernel: [ 309.174160] pata_amd 0000:00:0d.0: setting latency timer to 64 Jan 2 01:04:04 kernel: [ 309.177045] scsi4 : pata_amd Jan 2 01:04:04 kernel: [ 309.178628] scsi5 : pata_amd Jan 2 01:04:04 kernel: [ 309.178801] ata5: PATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xf000 irq 14 Jan 2 01:04:04 kernel: [ 309.178811] ata6: PATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xf008 irq 15 Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface eth0 Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface lo Jan 2 01:04:04 kernel: [ 309.485062] ata3: SATA link down (SStatus 0 SControl 300) Jan 2 01:04:04 kernel: [ 309.633094] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Jan 2 01:04:04 kernel: [ 309.641647] ata1.00: ATA-8: ST31000528AS, CC38, max UDMA/133 Jan 2 01:04:04 kernel: 
[ 309.641658] ata1.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32) Jan 2 01:04:04 kernel: [ 309.657614] ata1.00: configured for UDMA/133 Jan 2 01:04:04 kernel: [ 309.657969] scsi 0:0:0:0: Direct-Access ATA ST31000528AS CC38 PQ: 0 ANSI: 5 Jan 2 01:04:04 kernel: [ 309.658482] sd 0:0:0:0: Attached scsi generic sg0 type 0 Jan 2 01:04:04 kernel: [ 309.658588] sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) Jan 2 01:04:04 kernel: [ 309.658812] sd 0:0:0:0: [sda] Write Protect is off Jan 2 01:04:04 kernel: [ 309.658823] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 2 01:04:04 kernel: [ 309.658918] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 2 01:04:04 kernel: [ 309.675630] sda: sda1 sda2 Jan 2 01:04:04 kernel: [ 309.676440] sd 0:0:0:0: [sda] Attached SCSI disk Jan 2 01:04:05 kernel: [ 309.969102] ata2: SATA link down (SStatus 0 SControl 300) Jan 2 01:04:05 kernel: [ 310.281137] ata4: SATA link down (SStatus 0 SControl 300) Anybody have any additional ideas I could try? I am getting ready to just toss the motherboard.

    Read the article

  • Windows Presentation Foundation 4.5 Cookbook Review

    - by Ricardo Peres
    As promised, here's my review of Windows Presentation Foundation 4.5 Cookbook, which Packt Publishing kindly made available to me. It is an introductory book, targeted at WPF newcomers or users with little experience, following the typical recipes or cookbook style. Like all Packt Publishing books on development, each recipe comes with sample code that is self-sufficient for understanding the concepts it tries to illustrate. It starts in chapter 1 by introducing the most important concepts: the XAML language itself, what can be declared in XAML and how to do it, what dependency and attached properties are, as well as markup extensions and events, which should give readers a much-needed introduction to how WPF works and how to do basic stuff. It moves on to resources in chapter 2, which also makes sense, since resources are such an important concept in WPF. Next, in chapter 3, come the panels used for laying out controls on the screen; all of the out-of-the-box panels are described with typical use cases. Controls come next, in chapter 4; the difference between elements and controls is introduced, as well as content controls, headered controls and items controls, and all standard controls are presented. The book shows how to change the way they look by using templates. The next chapter, 5, talks about top-level windows and the WPF application object: how to access startup arguments, how to set the main window, using standard dialogs, and there's even a sample on how to have an irregularly-shaped window. Chapter 6 covers one of the most important concepts in WPF: data binding. All common scenarios are introduced: the binding modes, directions, triggers, etc. It talks about the INotifyPropertyChanged interface and how to use it for notifying data binding subscribers of changes in data sources. Data templates and selectors are also covered, as are value converters and data triggers. Examples include master-detail views and sorting, grouping and filtering collections, and binding trees and grids. Last, it covers validation rules and error templates. Chapter 7 talks about the current trend in WPF development, the Model View View-Model (MVVM) pattern. This is a well-known pattern for connecting the user interface to the underlying logic, and it is explained competently. A typical implementation is presented, which also introduces the command pattern used throughout WPF. A complete application using MVVM is presented from start to finish, including typical features such as undo. Style and layout are covered in chapter 8: why and how to use styles, applying them automatically, using the many types of triggers to change styles automatically, and using Expression Blend behaviors and templates. The next chapter, 9, is about graphics and animation programming. It explains how to create shapes, transform common UI elements, apply special effects and perform simple animations. The following chapter, 10, is about creating custom controls, either by deriving from UserControl or from an existing control or framework element class, applying custom templates to change the way the control looks. One useful example is a custom layout panel that arranges its children along a circumference. The final chapter, 11, is about multi-threaded programming and how one can integrate it with WPF. It includes how to invoke methods and properties on WPF classes from threads other than the main UI thread, using background tasks and timers, and even using the new C# 5.0 asynchronous operations.
    It's an interesting book, like I said, mostly for newcomers. It provides a competent introduction to WPF, with examples that cover the most common scenarios and also give directions for more complex ones. I recommend it to everyone wishing to learn WPF.

    Read the article

  • No HDMI Audio with GeForce 9600GT and nForce board

    - by Bobby
    I've been trying to get HDMI with sound working for the last few days, and I'm a little bit out of ideas. (I've verified that the hardware/Setup works via Windows.) aplay does not list my HDMI device: $ aplay -l **** List of PLAYBACK Hardware Devices **** card 0: NVidia [HDA NVidia], device 0: ALC662 rev1 Analog [ALC662 rev1 Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: NVidia [HDA NVidia], device 1: ALC662 rev1 Digital [ALC662 rev1 Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 I've already compiled the alsa drivers (1.0.24) from a snapshot (with --with-oss=no) and added the line options snd-hda-intel model=auto # Tried 3stack-dig and 6stack-dig too to /etc/modprobe.d/alsa-base.conf. Still, the device does not show up. If it is important, the HDMI TV is at the moment not configured to be part of the X session (I've tried that to, at least with X restart, and it didn't change anything). What did I miss? $ lspci 00:00.0 Host bridge: nVidia Corporation Device 07c3 (rev a2) 00:00.1 RAM memory: nVidia Corporation nForce 630i memory controller (rev a2) 00:01.0 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1) 00:01.1 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1) 00:01.2 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1) 00:01.3 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1) 00:01.4 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1) 00:01.5 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1) 00:01.6 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1) 00:02.0 RAM memory: nVidia Corporation nForce 630i memory controller (rev a1) 00:03.0 ISA bridge: nVidia Corporation MCP73 LPC Bridge (rev a2) 00:03.1 SMBus: nVidia Corporation MCP73 SMBus (rev a1) 00:03.2 RAM memory: nVidia Corporation MCP73 Memory Controller (rev a1) 00:03.4 RAM memory: nVidia Corporation MCP73 Memory Controller (rev a1) 00:04.0 USB Controller: nVidia Corporation GeForce 7100/nForce 630i USB (rev a1) 00:04.1 USB Controller: nVidia Corporation MCP73 [nForce 630i] USB 2.0 Controller (EHCI) (rev a1) 00:08.0 IDE interface: nVidia Corporation MCP73 IDE (rev a1) 00:09.0 Audio device: nVidia Corporation MCP73 High Definition Audio (rev a1) 00:0a.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1) 00:0b.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1) 00:0c.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1) 00:0d.0 PCI bridge: nVidia Corporation MCP73 PCI Express bridge (rev a1) 00:0e.0 IDE interface: nVidia Corporation MCP73 IDE (rev a2) 00:0f.0 Ethernet controller: nVidia Corporation MCP73 Ethernet (rev a2) 02:00.0 VGA compatible controller: nVidia Corporation G94 [GeForce 9600 GT] (rev a1)   $ aplay -L default pulse Playback/recording through the PulseAudio sound server front:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Front speakers surround40:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 4.0 Surround output to Front and Rear speakers surround41:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 5.1 Surround output to Front, Center, Rear and Subwoofer speakers surround71:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers 
iec958:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Digital IEC958 (S/PDIF) Digital Audio Output dmix:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Direct sample mixing device dmix:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Direct sample mixing device dsnoop:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Direct sample snooping device dsnoop:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Direct sample snooping device hw:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Direct hardware device without any conversions hw:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Direct hardware device without any conversions plughw:CARD=NVidia,DEV=0 HDA NVidia, ALC662 rev1 Analog Hardware device with all software conversions plughw:CARD=NVidia,DEV=1 HDA NVidia, ALC662 rev1 Digital Hardware device with all software conversions

    Read the article

  • Parallelism in .NET – Part 19, TaskContinuationOptions

    - by Reed
    My introduction to Task continuations demonstrates continuations on the Task class. In addition, I've shown how continuations allow handling of multiple tasks in a clean, concise manner. Continuations can also be used to handle exceptional situations using a clean, simple syntax. In addition to standard Task continuations, the Task class provides some options for filtering continuations automatically. This is handled via the TaskContinuationOptions enumeration, which provides hints to the TaskScheduler that it should only continue based on the operation of the antecedent task. This is especially useful when dealing with exceptions. For example, we can extend the sample from our earlier continuation discussion to include support for handling exceptions thrown by the Factorize method:

        // Get a copy of the UI-thread task scheduler up front to use later
        var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

        // Start our task
        var factorize = Task.Factory.StartNew(() =>
        {
            int primeFactor1 = 0;
            int primeFactor2 = 0;
            bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
            return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
        });

        // When we succeed, report the results to the UI
        factorize.ContinueWith(task =>
            textBox1.Text = string.Format("{0}/{1} [Succeeded {2}]",
                task.Result.Factor1, task.Result.Factor2, task.Result.Result),
            CancellationToken.None,
            TaskContinuationOptions.NotOnFaulted,
            uiScheduler);

        // When we have an exception, report it
        factorize.ContinueWith(task =>
            textBox1.Text = string.Format("Error: {0}", task.Exception.Message),
            CancellationToken.None,
            TaskContinuationOptions.OnlyOnFaulted,
            uiScheduler);

    The above code works by using a combination of features. First, we schedule our task, the same way as in the previous example. However, in this case, we use a different overload of Task.ContinueWith which allows us to specify both a specific TaskScheduler (in order to have the continuation run on the UI's synchronization context) as well as a TaskContinuationOptions value. In the first continuation, we tell the continuation that we only want it to run when there was not an exception by specifying TaskContinuationOptions.NotOnFaulted. When our factorize task completes successfully, this continuation will automatically run on the UI thread and provide the appropriate feedback. However, if the factorize task has an exception - for example, if the Factorize method throws an exception due to an improper input value - the second continuation will run. This occurs due to the specification of TaskContinuationOptions.OnlyOnFaulted in the options. In this case, we'll report the error received to the user. We can use TaskContinuationOptions to filter our continuations by whether or not an exception occurred and whether or not a task was cancelled.
This allows us to handle many situations, and is especially useful when trying to maintain a valid application state without ever blocking the user interface.  The same concepts can be extended even further, and allow you to chain together many tasks based on the success of the previous ones.  Continuations can even be used to create a state machine with full error handling, all without blocking the user interface thread.
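    The cancellation case mentioned above follows the same pattern; a short sketch, consistent with the article's sample (not shown in the original post):

        // A third filtered continuation, run only if factorize was canceled.
        factorize.ContinueWith(task => textBox1.Text = "Canceled",
            CancellationToken.None,
            TaskContinuationOptions.OnlyOnCanceled,
            uiScheduler);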

    Read the article

  • View Weather Underground Forecasts in Google Chrome

    - by Asian Angel
    If you like a simple straightforward interface for keeping up with weather forecasts then join us as we look at the Weather Underground extension for Google Chrome. Weather Underground in Action As soon as you click on the "Toolbar Icon" you will need to enter a location. Keep in mind that you will need to enter the "city and country" if using that option. Going with less information will yield an "error". Note: The extension did not work for some Asian locations during our tests. In honor of the Olympics we chose Vancouver, Canada. You can hover over the "Toolbar Button" to see the current conditions or click to view the current day's conditions, the current day's forecast, and the forecast for the following three days. It is a simple straightforward interface. Note: There are no options to worry with. Clicking on the "Detailed Forecast Link" in the drop-down window will take you to the Weather Underground webpage for your location. Clicking on the "Weather Underground Link" in the drop-down window will take you to the Weather Underground U.S. Homepage. Additional Weather Underground Fun Since we were focusing on Weather Underground we have an extra bit of fun for you. If you love being able to view a "large scale" map of your location with current conditions and forecast combined then you might want to have a look at Weather Underground's "wxmap webpage". Using the link below you can access the basic starting page where you will be asked to enter your location. Once you have entered the information you will see the default "Terrain View" for your location and a "Current Conditions & Forecast Window" in the lower left corner. You can modify how your map looks by choosing from "Temperature, Precipitation, Clouds, Satellite, Hybrid, & Terrain" views. Going full screen in your browser with this gives your monitor a wonderful and unique look that will have your family & friends asking you how you did it. Note: Terrain View shown here. Clicking on the "Settings Link" in the upper left corner will let you tweak your map view very nicely. Conclusion If you love using Weather Underground for your weather forecasts then you can add a "double dose" of goodness to your browser. Links Download the Weather Underground extension (Google Chrome Extensions) Access the Full Screen Weather Underground Map & Forecast for your area

    Read the article

  • Where’s my MD.050?

    - by Dave Burke
    A question that I'm sometimes asked is "where's my MD.050 in OUM?" For those not familiar with an MD.050, it serves the purpose of being a Functional Design Document (FDD) in one of Oracle's legacy methods. Functional Design Documents have existed for many years, with their primary purpose being to describe the functional aspects of one or more components of an IT system - typically, a Custom Extension of some sort. So why don't we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into "Design mode" too early in the process, whereas OUM encourages more emphasis on gathering and describing the functional requirements of a system ahead of the formal Analysis and Design process. So that just means more work up front for the Business Analyst or Functional Consultants, right? Well, no... the design of a solution, particularly when it involves a complex Custom Extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite: by putting more emphasis on clearly understanding the nuances of the functional requirements early in the process, the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost-effective) to update a document or a diagram than to change lines of code. So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a "User Story", or it could be what Cockburn1 describes as a "fully dressed Use Case". It is worth mentioning at this point the highly scalable nature of OUM, in the sense that "documents" should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more Agile approach to describing functional requirements - through "User Stories", perhaps. In any event, it is quite common for a Custom Extension to involve the creation of several "components", i.e. some new screens, an interface, a report, etc. Therefore several Use Cases might be required, which in turn can then be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail, and assembled into a Package, you can now create an Analysis Model for the Package. An Analysis Model is conceptual in nature and, depending on the solution being developed, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams, etc.) which collectively describe the Data, Behavior and User Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Oriented approach, the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, the various elements of the Analysis Model may be represented within the Analysis Specification. If we now return to the original question of "Where's my MD.050?":
    The full answer would be:
    1. Capture the functional requirements within a Use Case
    2. Group related Use Cases into a Package
    3. Create an Analysis Model for each Package
    4. Consider creating an Analysis Specification (AN.100) as an index to each Analysis Model artifact
    An alternative answer for a relatively simple Custom Extension would be:
    1. Capture the functional requirements within a Use Case
    2. Optionally, group related Use Cases into a Package
    3. Create an Analysis Specification (AN.100) for each Package
    1 Cockburn, A., 2000. Writing Effective Use Cases, 1st ed. Addison-Wesley Professional.

    Read the article
