Search Results

Search found 26977 results on 1080 pages for 'input device'.


  • Oracle Linux 6 DVDs Now Available

    - by sergio.leunissen
    On Sunday 6 February 2011, Oracle Linux 6 was released on the Unbreakable Linux Network for customers with an Oracle Linux support subscription. Shortly after that, the Oracle Linux 6 RPMs were made available on our public yum server. Today we published the installation DVD images on edelivery.oracle.com/linux. Oracle Linux 6 is free to download, install and use. The full release notes are here, but similar to my recent post about Oracle Linux 5.6, I wanted to highlight a few items about this release. Unbreakable Enterprise Kernel As is the case with Oracle Linux 5.6, the default installed kernel on x86_64 platform in Oracle Linux 6 is the Unbreakable Enterprise Kernel. If you haven't already, I highly recommend you watch the replay of this webcast by Chris Mason on the performance improvements made in this kernel. # uname -r 2.6.32-100.28.5.el6.x86_64 The Unbreakable Enterprise Kernel is delivered via the package kernel-uek: [root@localhost ~]# yum info kernel-uek ... Installed Packages Name : kernel-uek Arch : x86_64 Version : 2.6.32 Release : 100.28.5.el6 Size : 84 M Repo : installed From repo : anaconda-OracleLinuxServer-201102031546.x86_64 Summary : The Linux kernel URL : http://www.kernel.org/ License : GPLv2 Description: The kernel package contains the Linux kernel (vmlinuz), the core of : any Linux operating system. The kernel handles the basic functions : of the operating system: memory allocation, process allocation, : device input and output, etc. ext4 file system The ext4 or fourth extended filesystem replaces ext3 as the default filesystem in Oracle Linux 6. # mount /dev/mapper/VolGroup-lv_root on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") /dev/sda1 on /boot type ext4 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) Red Hat compatible kernel Oracle Linux 6 also includes a Red Hat compatible kernel built directly from RHEL source. It's already installed, so booting it is a matter of editing /etc/grub.conf # rpm -qa | grep kernel-2.6.32 kernel-2.6.32-71.el6.x86_64 Oracle Linux 6 no longer includes a Red Hat compatible kernel with Oracle bug fixes. The only Red Hat compatible kernel included is the one built directly from RHEL source. Yum-only access to Unbreakable Linux Network (ULN) Oracle Linux 6 uses yum exclusively for access to Unbreakable Linux Network. To register your system with ULN, use the following command: # uln_register No Itanium Support Oracle Linux 6 is not supported on the Itanium (ia64) platform. Next Steps Read the release notes Download Oracle Linux 6 for free Discuss on the Oracle Linux forum

    Read the article

  • Upgrade 10g Osso to 11g OAM (Part 2)

    - by Pankaj Chandiramani
    This is part 2 of http://blogs.oracle.com/pankaj/2010/11/upgrade_10g_osso_to_11g_oam.html In the last post we saw an overview of upgrading OSSO to OAM 11g; now for some more details on the same. Since we are using the co-existence feature, we have to install the OAM server and upgrade the existing OSSO 10g server to the OAM servers. OAM Upgrade Steps Overview: Pre-req: you already have OAM 11g installed. Upgrade Step 1: Configure the user store and make it primary. Upgrade Step 2: Create the policy domain; this is done by the UA automatically. Upgrade Step 3: Migrate partners; this is done by running the Upgrade Assistant. Finally, verify the successful upgrade. Details on the UA step: the existing OSSO 10g servers are upgraded to the OAM server by running the UA script in OAM, which copies over all the partner app details from OSSO to OAM 11g. run_ua.sh is the script name; it will ask you to input the Policies.properties file from the SSO $OH/sso/config folder of OSSO 10g and other variables such as the DB password. Some pointers: Upgrading OSSO to OAM 11g by default enables the co-existence mode on the OAM server. Front-end the OAM server with the same load balancer that fronts the OSSO 10g servers. Now OAM and OSSO 10g servers are working in co-exist mode: OAM 11g is made to understand the 10g OSSO token format and session handling capabilities so as to co-exist with 10g OSSO servers. How to test? Try to access the partner applications and verify that single sign-on works. Also verify that the user does not have to log in again if already authenticated by either the OAM or the OSSO 10g server. Screenshots and troubleshooting tips to follow.

    Read the article

  • $PATH issues with OSX Lion

    - by Mikey
    I'm having some issues with running mysql from terminal: macmini:~ michael$ which mysql /Applications/XAMPP/xamppfiles/bin/mysql macmini:~ michael$ mysql -bash: /usr/local/mysql/bin/mysql: No such file or directory I had a previous installation at /usr/local/mysql/bin/mysql which no longer exists. My path variable is as follows: macmini:~ michael$ echo $PATH /usr/bin:/bin:/usr/sbin:/sbin:/Applications/XAMPP/xamppfiles/bin/:/usr/local/bin:/usr/X11/bin:/usr/local/MacGPG2/bin:/usr/texbin Dropping to root seems to function correctly: macmini:~ michael$ sudo bash Password: bash-3.2# mysql Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 66 Server version: 5.1.44 Source distribution Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> I seem to have found the issue - but I'm not sure how to change or remove this alias macmini:~ michael$ type -a mysql mysql is aliased to `/usr/local/mysql/bin/mysql' mysql is /Applications/XAMPP/xamppfiles/bin/mysql mysql is /Applications/XAMPP/xamppfiles/bin/mysql

    Read the article

  • How about a new platform for your next API… a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspxSay what? I’m seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially driving A/B or MVT testing). That’s a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sound like a Content Mangement System would be a good fit? I’ve been looking at that and it’s a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using CMS which is very useful, but probably not something you’d build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it’s simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with: Accept=application/json and get the same content rendered as JSON for the app to use. Enter Umbraco Umbraco is an open source .NET CMS that’s been around for a while. It has very good adoption, a lively community and a good release cycle. It’s easy to use, has all the functionality you need for a CMS-driven API, and it’s scalable (although you won’t necessarily put much scale on the CMS layer). In the rest of this post, I’ll build out a simple app config API using Umbraco. We’ll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we’ll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we’ll look at how you could build this into a wider solution. If you want to try this for yourself, it’s ultra easy – there’s an Umbraco image in the Azure Website gallery, so all you need to to is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure website to store all the content, as well as a Website instance for editing and accessing content. They’re standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete). 1. Create Configuration Document Type In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We’ll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type: This will base your new type off the Master type, which gives you some existing properties that we’ll use – like the Page Title which will be the resource URL. 
In the Generic Properties tab for the new Document Type, you set the properties you’ll be able to edit and return for the resource: Here I’ve added a text string where I’ll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It’s likely to change during the life of the product, but not very often, so it’s good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release. 2. Define the response template Now we’ve defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it’s easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here’s a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering: @inherits Umbraco.Web.Mvc.UmbracoTemplatePage @using Newtonsoft.Json @{ Layout = null; } @if(UmbracoContext.HttpContext.Request.Headers["accept"] != null &amp;&amp; UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json") { Response.ContentType = "application/json"; @Html.Raw(JsonConvert.SerializeObject(new { cacheLifespan = CurrentPage.cacheLifespan, bannerImageUrl = CurrentPage.bannerImage, nextReleaseDate = CurrentPage.nextReleaseDate })) } else { <h1>App configuration</h1> <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p> <p>Banner Image: </p> <img src="@CurrentPage.bannerImage"> <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p> } That’s a rough-and ready example of what you can do. You could make it completely generic and just render all the document’s properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship. 3. Create the content With your document type created, in  the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type: Here I’ve set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it’ll be available to access.  4. Access the resource Now if you open  that URL in the browser, you’ll see the HTML version rendered: - complete with the  image and formatted date. 
Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get this response:   That’s the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort. 5. The wider landscape If you have fairy stable content to expose as an API, I think  this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn’t need it to. When you have additional requirements, like logging API access requests - but doing it out-of-band so clients aren’t impacted, you can put a very thin API layer on top of Umbraco, and cache the CMS responses in your API layer:   Here the API does a passthrough to CMS, so the CMS still controls the content, but it caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support 1000s of request per second, you’re scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). This diagram also shows an approach to logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process. Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I’ll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources. Is it maintainable? We have three moving parts, which are all managed resources in Azure –  an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API which is very simple. Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day’s effort. Worth considering if you’re publishing long-lived resources through your API.
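    As a quick illustration of the content negotiation described above, a client only needs to send the Accept header to get the JSON rendering of the same resource. The following is a minimal Python sketch; the URL and the property names (cacheLifespan, bannerImageUrl, nextReleaseDate) are placeholders taken from the walkthrough, not a live endpoint.

```python
import json
import urllib.request

# Hypothetical Umbraco-hosted config resource from the walkthrough above.
CONFIG_URL = "http://static.example.com/config/myapp"

def fetch_config(url):
    """Request the resource as JSON via content negotiation (Accept header)."""
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    config = fetch_config(CONFIG_URL)
    # The property names mirror the Document Type defined earlier.
    print(config.get("cacheLifespan"), config.get("bannerImageUrl"))
```

    Requesting the same URL without the Accept header would return the HTML rendering instead, which is exactly the dual behavior the Razor template above implements.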

    Read the article

  • "Not enough memory" error when running Peachtree Accounting 2010 with Logmein

    - by Justin Bowers
    I set up one of my clients with a Logmein Pro account recently and everything has been running fine so far. Today, however, my client received an error when they were logged into their work computer remotely and tried to run a report from Peachtree Accounting 2010. The error states: "Not enough memory to run" and she can't print the report. I looked online and found another user who had this same problem with Peachtree 2010 when connected via Logmein here. The error doesn't happen if the user runs the report while sitting at her work computer, but it always happens when connected via Logmein. I know most of you will dismiss it as just a Logmein issue, which I'm sure it is - but I would like any constructive input from anyone else who may have had the same or a similar problem. Thanks, Justin

    Read the article

  • Webmin and/or port 10000 not working

    - by DisgruntledGoat
    I've recently installed Webmin on a Ubuntu server but I can't get it to work. I asked a recent question about saving iptables but it turns out you don't need to "save" iptables changes. Anyway, I still can't get Webmin working after opening the port up: iptables -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT It seems that either the command is not opening up port 10000, or there is a separate problem with Webmin. If I run iptables -L I see lines like the following, but no port 10000: ACCEPT tcp -- anywhere anywhere tcp dpt:5555 state NEW ACCEPT tcp -- anywhere anywhere tcp dpt:8002 state NEW ACCEPT tcp -- anywhere anywhere tcp dpt:9001 state NEW However, there is a line: ACCEPT tcp -- anywhere anywhere tcp dpt:webmin Any ideas why Webmin is not working? The IP address works fine and we can view web sites on the server, but https://[ip]:10000/ (or http) doesn't work.
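    One way to narrow this down is to test whether anything is reachable on port 10000 from another machine: if the TCP connection is refused or times out even with the iptables rule in place, the problem is more likely rule ordering or Webmin not actually listening. A rough sketch of such a check (the IP address is a placeholder):

```python
import socket

HOST = "192.0.2.10"   # placeholder - substitute the server's IP
PORT = 10000

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"connection failed: {exc}")
        return False

print("port 10000 reachable" if port_open(HOST, PORT)
      else "port 10000 blocked or nothing listening")
```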

    Read the article

  • Windows Vista language text service problem

    - by Azho KG
    Hi all, I'm using the English version of Vista and having problems with programs that display Russian characters. For example, dictionaries don't work for me, since they display Russian characters. I also see just "magic" characters in a text editor (Notepad) when I open a Russian text file. I tried to change the whole Vista interface language to Russian, but that still didn't solve the problem. I CAN read any web page from the browser, so that's not a problem. Adding "Russian" in "Text Services and Input Languages" doesn't solve this problem either. Does anyone know how to solve this? Thanks. My system: 32-bit Windows Vista Home Premium - SP2

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

    Read the article

  • Accessing and Updating Data in ASP.NET: Filtering Data Using a CheckBoxList

    Filtering Database Data with Parameters, an earlier installment in this article series, showed how to filter the data returned by ASP.NET's data source controls. In a nutshell, the data source controls can include parameterized queries whose parameter values are defined via parameter controls. For example, the SqlDataSource can include a parameterized SelectCommand, such as: SELECT * FROM Books WHERE Price > @Price. Here, @Price is a parameter; the value for a parameter can be defined declaratively using a parameter control. ASP.NET offers a variety of parameter controls, including ones that use hard-coded values, ones that retrieve values from the querystring, and ones that retrieve values from session, and others. Perhaps the most useful parameter control is the ControlParameter, which retrieves its value from a Web control on the page. Using the ControlParameter we can filter the data returned by the data source control based on the end user's input. While the ControlParameter works well with most types of Web controls, it does not work as expected with the CheckBoxList control. The ControlParameter is designed to retrieve a single property value from the specified Web control, but the CheckBoxList control does not have a property that returns all of the values of its selected items in a form that the CheckBoxList control can use. Moreover, if you are using the selected CheckBoxList items to query a database you'll quickly find that SQL does not offer out of the box functionality for filtering results based on a user-supplied list of filter criteria. The good news is that with a little bit of effort it is possible to filter data based on the end user's selections in a CheckBoxList control. This article starts with a look at how to get SQL to filter data based on a user-supplied, comma-delimited list of values. Next, it shows how to programmatically construct a comma-delimited list that represents the selected CheckBoxList values and pass that list into the SQL query. Finally, we'll explore creating a custom parameter control to handle this logic declaratively. Read on to learn more! Read More >
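    The core idea - turning the user's selected checkbox values into a parameterized filter rather than splicing a comma-delimited string directly into the SQL text - can be sketched outside ASP.NET as well. The table and column names below are made up for illustration only:

```python
import sqlite3

# Simulated "selected checkbox" values coming from the UI.
selected_genres = ["Fiction", "History", "Science"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Books (Title TEXT, Genre TEXT, Price REAL)")
conn.executemany(
    "INSERT INTO Books VALUES (?, ?, ?)",
    [("A", "Fiction", 9.99), ("B", "Travel", 14.50), ("C", "History", 20.00)],
)

# Build one placeholder per selected value instead of embedding a
# comma-delimited string in the SQL text (avoids injection issues).
placeholders = ",".join("?" for _ in selected_genres)
query = f"SELECT Title, Genre FROM Books WHERE Genre IN ({placeholders})"
for row in conn.execute(query, selected_genres):
    print(row)
```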

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
    So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelpers controls (like .TextBoxFor() etc.) don't bind to model values on Postback, but rather get their value directly out of the POST buffer from ModelState. Effectively it looks like you can't change the display value of a control via model value updates on a Postback operation. To demonstrate here's an example. I have a small section in a document where I display an editable email address: This is what the form displays on a GET operation and as expected I get the email value displayed in both the textbox and plain value display below, which reflects the value in the mode. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address which needs to be manipulated via the model in the Controller code. Here's the Razor markup: <div class="fieldcontainer"> <label> Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small> </label> <div> @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield"}) @Model.User.Email </div> </div>   So, I have this form and the user can change their email address. On postback the Post controller code then asks the business layer whether the change is allowed. If it's not I want to reset the email address back to the old value which exists in the database and was previously store. The obvious thing to do would be to modify the model. Here's the Controller logic block that deals with that:// did user change email? if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail) { if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; } else // allow email change but require verification by forcing a login user.IsVerified = false; }… model.user = user; return View(model); The logic is straight forward - if the new email address is not valid because it already exists I don't want to display the new email address the user entered, but rather the old one. To do this I change the value on the model which effectively does this:model.user.Email = oldEmail; return View(model); So when I press the Save button after entering in my new email address ([email protected]) here's what comes back in the rendered view: Notice that the textbox value and the raw displayed model value are different. The TextBox displays the POST value, the raw value displays the actual model value which are different. This means that MVC renders the textbox value from the POST data rather than from the view data when an Http POST is active. Now I don't know about you but this is not the behavior I expected - initially. This behavior effectively means that I cannot modify the contents of the textbox from the Controller code if using HtmlHelpers for binding. Updating the model for display purposes in a POST has in effect - no effect. (Apr. 25, 2012 - edited the post heavily based on comments and more experimentation) What should the behavior be? After getting quite a few comments on this post I quickly realized that the behavior I described above is actually the behavior you'd want in 99% of the binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form for the user matches what they typed. 
So if an error occurs, the error doesn't mysteriously disappear getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still it is a little non-obvious because the way you create the UI elements with MVC, it certainly looks like your are binding to the model value:@Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield",required="required" }) and so unless one understands a little bit about how the model binder works this is easy to trip up. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that ModelState/POST values provide the display value. Workarounds The default behavior should be fine for 99% of binding scenarios. But if you do need fix up values based on your model rather than the default POST values, there are a number of ways that you can work around this. Initially when I ran into this, I couldn't figure out how to set the value using code and so the simplest solution to me was simply to not use the MVC Html Helper for the specific control and explicitly bind the model via HTML markup and @Razor expression: <input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" /> And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you also are forced to remember the naming conventions that MVC imposes in order for ModelBinding to work properly which is a pain to remember and set manually (name is the same as the property with . syntax, id replaces dots with underlines). Use the ModelState Some of my original confusion came because I didn't understand how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the Post back value submitted on the page that can be mapped to the model. In other words there's one ModelState entry for each bound property of the model. Each ModelState entry contains a value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form. The RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds values on a GET operation. On a POST operation however, it'll always used the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior that I was expecting originally you can actually get it by clearing the ModelState in the controller code:ModelState.Clear(); This clears out all the captured ModelState values, and effectively binds to the model. Note this will produce very similar results - in fact if there are no binding errors you see exactly the same behavior as if binding from ModelState, because the model has been updated from the ModelState already and binding to the updated values most likely produces the same values you would get with POST back values. The big difference though is that any values that couldn't bind - like say putting a string into a numeric field - will now not display back the value the user typed, but the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But - clearing out all values might be a bit heavy handed. 
    You might want to fix up one or two values in a model but rarely would you want the entire model to update from the model. So, you can also clear out individual values on an as-needed basis: if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; ModelState.Remove("User.Email"); } This allows you to remove a single value from the ModelState and effectively allows you to replace that value for display from the model. Why? While researching this I came across a post from Microsoft's Brad Wilson who describes the default binding behavior best in a forum post: The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers. There you have it. It's not the most intuitive behavior, but in hindsight this behavior does make some sense even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one but you have to know about some of the innards of ModelState and how it actually works to figure that out. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET MVC.

    Read the article

  • Is CUDA, cuBLAS or cuBLAS-XT the right place to start with for machine learning?

    - by Stefan R. Falk
    I am not sure if this is the right forum to post this question - but it surely is not a question for Stack Overflow. I am working on my bachelor thesis, and for it I am implementing a so-called Echo State Network, which basically is an artificial neural network that has a large reservoir of randomly initialized neurons and just a few input and output neurons... but I think we can skip that. The thing is, there is a Python library called Theano which I am using for this implementation. It encapsulates the CUDA API and offers a quite "comfortable" way to access the power of an NVIDIA graphics card. CUDA ships with a library called cuBLAS (Basic Linear Algebra Subroutines) for linear algebra operations, and since CUDA 6.0 there is also cuBLAS-XT, an extension which allows calculations to run on multiple graphics cards. My question at this point is whether it would make sense to start using cuBLAS and/or cuBLAS-XT right now, since the API is quite complex, or rather wait for libraries that will build on top of them (as Theano does on basic CUDA)? If you think this is the wrong place for this question, please tell me which one is. Thank you.
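    For context on the trade-off being asked about, this is roughly what Theano already provides without touching the cuBLAS API directly - a sketch using the classic Theano interface; whether it actually runs on the GPU depends on how Theano is configured (for example THEANO_FLAGS=device=gpu,floatX=float32):

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic matrices; with Theano configured for the GPU the dot product
# is dispatched to CUDA/cuBLAS under the hood without any explicit
# device or memory management in user code.
x = T.matrix("x")
y = T.matrix("y")
dot = theano.function([x, y], T.dot(x, y))

a = np.random.rand(512, 512).astype(theano.config.floatX)
b = np.random.rand(512, 512).astype(theano.config.floatX)
print(dot(a, b).shape)   # (512, 512)
```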

    Read the article

  • Box 2d basic questions

    - by philipp
    I am a bit new to Box2D and I am developing a game with type and letters. I am using an SVG font and generating the Box2D bodies directly from the glyphs' path definitions, using their convex hulls. I also have a decomposition routine that decomposes a hull if necessary. All this is more or less working, except that I get some strange errors which are definitely caused by the scale factors. The problem is caused by two factors: first, the world scale of Box2D; second, the precision of the curve approximation of the glyph vectors. When scaling down the input vertices for Box2D, numerical precision can make some of them become equal, which causes errors in Box2D. Scaling my glyphs up a bit makes this go away. It also goes away if I choose a different world scale factor, but that slows down the whole animation quite a lot! So if my viewport is about 990px * 600px and I want to animate glyphs in Box2D which should have a size from about 50px * 50px up to 300px * 300px, which scale factor of the b2World should I choose? How small should the smallest distance from one vertex to another be while approximating the glyph vectors? Thanks for the help, greetings, philipp. EDIT: I continued reading the Box2D docs and, after rethinking the unit system, which is designed to handle objects from 0.1 up to 10 meters, I calculated a scale factor of 75. So objects 600px wide will be 8 meters wide in Box2D, and even small objects of about 20px width will become 0.26 meters wide in Box2D. I will keep trying with these values, but if there is somebody out there who could give me a clever piece of advice I would be happy!
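    A small sketch of the pixels-to-meters bookkeeping being discussed, assuming the factor of 75 from the edit. The minimum-distance filter is an illustrative guess at a reasonable threshold (Box2D's own linearSlop is around 0.005 m), not a value taken from the question:

```python
# World-scale helpers: Box2D works best with bodies between ~0.1 and 10 m,
# so pick a pixels-per-meter factor and convert consistently at the boundary.
PPM = 75.0   # the factor discussed above: 75 px == 1 m

def to_meters(px):
    return px / PPM

def to_pixels(m):
    return m * PPM

# A 600 px glyph becomes 8 m wide in the physics world; a 20 px detail ~0.27 m.
print(to_meters(600), to_meters(20))

# Drop vertices that would collapse together at physics scale - Box2D
# rejects polygons whose vertices end up too close to each other.
def filter_vertices(points_px, min_dist_m=0.01):
    kept = []
    for x, y in ((to_meters(px), to_meters(py)) for px, py in points_px):
        if not kept or (x - kept[-1][0]) ** 2 + (y - kept[-1][1]) ** 2 >= min_dist_m ** 2:
            kept.append((x, y))
    return kept

print(filter_vertices([(0, 0), (0.2, 0), (30, 0), (60, 0)]))
```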

    Read the article

  • In WHM - Addon Domain Matches Primary Domain From Separate Account - I can't delete the addon domain

    - by Joshua Riddle
    I have an account (domain1.com) with an addon domain (domain2.com) that matches the primary domain (domain2.com) of a separate account. I believe it was created by renaming the primary domain of the second account. I want to delete the addon domain from domain1.com. However, when I try I get the error: Error from park wrapper: Sorry, you do not control the domain domain2.com. I've tried all the methods in other forum posts and have been unable to successfully remove the addon domain and subdomain from domain1.com. Thanks in advance for all the great input!

    Read the article

  • BAD ARCHIVE MIRROR using PXE BOOT method

    - by omkar
    I'm trying to automatically install Ubuntu on a client PC by using the PXE boot method. My objectives are below (I'm following the steps given in this link: installation using PXE BOOT): 1. The server will have a KICKSTART config file which contains the parameters for the OS installation and the files which are required for the OS installation. 2. The client will have to detect this configuration along with the setup files and complete the installation without any input from the user. On my server I have installed dhcp3-server, Apache2 and TFTP to help with the installation. I have nearly achieved my first objective: I'm able to boot my client using the files stored on the server, but during the installation stage it asks me to "CHOOSE A MIRROR of UBUNTU ARCHIVE". I gave the server's IP address and the path on the server where the files are located, but even then it gives me the error "BAD ARCHIVE MIRROR". So, instead of downloading all the files from the internet and storing them on my disk, can I use the files which come with the Ubuntu CD, and how should I store these files, and in what format (should I zip them), on the disk? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS, so how should the configuration file be given to the installation process?
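    If the goal is just to point the installer at the files from the CD instead of the internet, one rough approach is to mount the CD/ISO on the PXE server and expose it over HTTP. Note that the installer will only accept it as a mirror if the expected archive layout (dists/, pool/) is present, which depends on which CD is used; the mount path and port below are assumptions:

```python
# Serve a mounted Ubuntu CD over HTTP so the netboot installer can be
# pointed at it as a local mirror. Paths here are assumptions - adjust
# to wherever the CD/ISO is mounted on the PXE server.
import functools
import http.server
import socketserver

CD_MOUNT = "/mnt/ubuntu-cd"   # e.g. mount -o loop ubuntu.iso /mnt/ubuntu-cd
PORT = 8000

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=CD_MOUNT)
with socketserver.TCPServer(("", PORT), handler) as httpd:
    print(f"serving {CD_MOUNT} at http://0.0.0.0:{PORT}/")
    httpd.serve_forever()
```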

    Read the article

  • Building a regex builder

    - by i.h4d35
    I am a beginner in programming in general and web development in particular, and I am especially bad at regular expressions. Recently I was involved in building a couple of cPanel plugins (Perl-CGI), and that's when I realized how bad I am at regex. As a result, I have decided to build an online regex builder - this will help me learn regex and help others struggling with it. I have checked out GSkinner, Rubular and a couple of others like RegexPal; they seemed a little difficult to use, hence I thought of writing another one. I do not know which tool is best suited for the job: should I write it in Perl or Python? My skill level is between beginner and intermediate in both of those languages. What would be a good starting point - building it for the CLI or for the browser? I plan to take a string as input, ask whether the user wants to search or search-and-replace, have them enter the search string (and the replace string where applicable), and then generate a regex. Would this be the right way to go?
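    As a starting point for a CLI version, the core of such a builder can be tiny in Python: escape the literal input, optionally wrap it with word boundaries or flags, and either search or substitute. A minimal sketch:

```python
import re

def build_pattern(search_text, whole_words=False, ignore_case=False):
    """Turn a literal search string into a regex, escaping metacharacters."""
    pattern = re.escape(search_text)
    if whole_words:
        pattern = r"\b" + pattern + r"\b"
    flags = re.IGNORECASE if ignore_case else 0
    return re.compile(pattern, flags)

def run(text, search_text, replace_text=None, **opts):
    regex = build_pattern(search_text, **opts)
    if replace_text is None:
        return regex.findall(text)          # plain search
    return regex.sub(replace_text, text)    # search and replace

sample = "The cat sat on the catalogue."
print(run(sample, "cat", whole_words=True))          # ['cat']
print(run(sample, "cat", "dog", whole_words=True))   # The dog sat on the catalogue.
```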

    Read the article

  • URL Rewrite is adding HTTPS to my canonical redirects in IIS7

    - by Derek Hunziker
    Hello, I have the following rule defined in my Web.config: <rule name="Enforce canonical hostname" stopProcessing="true"> <match url="(.*)" /> <conditions> <add input="{HTTP_HOST}" negate="true" pattern="^www\.mydomain\.org$" /> </conditions> <action type="Redirect" url="http://www.mydomain.com/" redirectType="Permanent" /> </rule> What I am experiencing is strange... It appears that I am being redirected to https://www.mydomain.com/ which causes my browser to hang. I do not have SSL encryption turned on, nor do I have any special authorization rules. The web server in question is behind an F5 load balancer. Any ideas?

    Read the article

  • Ubuntu keyboard detection from bash script

    - by Ryan Brubaker
    Excuse my ignorance of linux OS/hardware issues...I'm just a programmer :) I have an application that calls out to some bash scripts to launch external applications, in this case Firefox. The application runs on a kiosk with touch screen capability. When launching Firefox, I also launch a virtual keyboard application that allows the user to have keyboard input. However, the kiosk also has both PS/2 and USB slots that would allow a user to plug-in a keyboard. If a keyboard were plugged in, it would be nice if I didn't have to launch the virtual keyboard and provide more screen space for the Firefox window. Is there a way for me to detect if a keyboard is plugged in from the bash script? Would it show up in /dev, and if so, would it show up at a consistent location? Would it make a difference if the user used a PS/2 or USB keyboard? Thanks!
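    One way to answer this from a script without watching /dev directly is to scan /proc/bus/input/devices, where both PS/2 and USB keyboards normally show up. The following is a best-effort sketch (the "kbd" handler plus the EV=120013 bitmask is a common keyboard signature, but the heuristic may need refining), intended to be called from the launching bash script via its exit code:

```python
# Heuristic check for a physical keyboard by scanning /proc/bus/input/devices.
def keyboard_present(path="/proc/bus/input/devices"):
    try:
        with open(path) as f:
            blocks = f.read().split("\n\n")
    except OSError:
        return False
    for block in blocks:
        # "kbd" in the handler list plus the full key-event bitmask
        # (EV=120013) is the usual signature of a real keyboard.
        if "Handlers=" in block and "kbd" in block and "EV=120013" in block:
            return True
    return False

# Exit status usable from the calling bash script, e.g.:
#   python3 detect_kb.py && echo "hardware keyboard" || echo "launch virtual keyboard"
if __name__ == "__main__":
    raise SystemExit(0 if keyboard_present() else 1)
```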

    Read the article

  • Windows Phone 7 Design using Expression Blend - Resources

    - by Nikita Polyakov
    I’ve been doing a series of talks across Florida regarding Windows Phone 7 Design using Microsoft Expression Blend 4. I discuss the WP7 phone and application experience; show how to use Expression Blend toolset to effectively design such apps. Next presentation is on 5/4/2010 at 6:30PM EST will be a webcast format over LiveMeeting at Ft. Lauderdale Online group. Registration and the LiveMeeting link are both here: http://www.fladotnet.com/Reg.aspx?EventID=459 [I will post a link if it’s recorded]   Here are the resources from my presentations: The Biggest source is the Windows Phone UI and Design Language video from MIX10 Windows Phone 7 Design Guide as it’s found on the WP7 Dev Home Page Study The Silverlight Mobile Tutorials on official Silverlight website I will be blogging a separate entry for a new demo app that will showcase the elements I presented. I suggest you actually watch all of the MIX videos about SL and Design as great primer to get you thinking the WP7 way.   A lot happening with WP7Dev and it’s just the beginning! So watch these Twitter accounts and blogs: @Ckindel - Charlie Kindel - WP7 Dev Head http://blogs.msdn.com/ckindel @WP7Dev - Official Dev Twitter @WP7 - Official WP7 Twitter Peter Torr - http://blogs.msdn.com/ptorr Mike Harsh - http://blogs.msdn.com/mharsh Shawn Oster - http://www.shawnoster.com   Other worthwhile mention my local friends speaking and blogging about Windows Phone 7: Bill Reiss is doing great presentations on Building games with XNA for Windows Phone 7. Be on the lookout for those around Florida. Bill is a Silverlight MVP and has a legacy of XNA and Silverlight games, see his site. Kevin Wolf aka ByteMaster he is a Device Application Developer MVP with tremendous experience building mobile applications. He has developed WinMo-GF a multi-platform gaming framework. Get these tools and get creating! You will need the following components installed in this order: Expression Blend 4 Beta Windows Phone Developer Tools Microsoft Expression Blend Add-in Preview for Windows Phone Microsoft Expression Blend SDK Preview for Windows Phone Want more training? Don’t forget that Channel 9 has complete walkthroughs of their WP7 Training Kit posted online. PS: To continue with all this design talk check out Microsoft .toolbox “Learn to create Silverlight applications using Expression Studio and to apply fundamental design principles.” A great website with a lot of design tutorials set up as a wonderful full course on design all for free, including a great forum community and neat little avatars you can build yourself.

    Read the article

  • Perform Unit Conversions with the Windows 7 Calculator

    - by Matthew Guay
    Want to easily convert area, volume, temperature, and many other units? With the Calculator in Windows 7, it's easy to convert almost any unit into another. The New Calculator in Windows 7 The Calculator received a visual overhaul in Windows 7, but at first glance it doesn't seem to have any new functionality. Here's Windows 7's Calculator on the left, with Vista's calculator on the right. But looks can be deceiving: Windows 7's Calculator has lots of exciting new features. Let's try them out. Simply type Calculator in the Start menu search. To uncover the new features, click the View menu. Here you can select many different modes, including the Unit Conversion mode, which we will look at. When you select Unit Conversion mode, the Calculator expands with a form on the left side. This conversions pane has 3 drop-down menus. From the top one, select the type of unit you want to convert. In the next two menus, select which values you wish to convert to and from. For instance, here we selected Temperature in the first menu, Degrees Fahrenheit in the second menu, and Degrees Celsius in the third menu. Enter the value you wish to convert in the From box, and the conversion will automatically appear in the bottom box. The Calculator contains dozens of conversion values, including more uncommon ones. So if you've ever wanted to know how many US gallons are in a UK gallon, or how many knots a supersonic jet travels in an hour, this is a great tool for you! Conclusion Windows 7 is filled with little changes that give you an all-around better experience in Windows and help you work more efficiently and productively. With the new features in the Calculator, you just might feel a little smarter, too!
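    For reference, the temperature conversion shown in the example is just the usual formula; a tiny sketch for sanity-checking a result from the Calculator:

```python
# The same temperature conversion the Calculator's Unit Conversion mode
# performs, handy for double-checking a result.
def f_to_c(fahrenheit):
    return (fahrenheit - 32) * 5.0 / 9.0

def c_to_f(celsius):
    return celsius * 9.0 / 5.0 + 32

print(f_to_c(212))   # 100.0
print(c_to_f(100))   # 212.0
```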

    Read the article

  • Oracle HRMS API – Update Employee Fed Tax Rule

    - by PRajkumar
    API --  pay_federal_tax_rule_api.update_fed_tax_rule Example -- DECLARE    lb_correction                              BOOLEAN;    lb_update                                   BOOLEAN;    lb_update_override                 BOOLEAN;    lb_update_change_insert      BOOLEAN;    ld_effective_start_date            DATE;    ld_effective_end_date             DATE;    ln_assignment_id                     NUMBER                    := 33561;    lc_dt_ud_mode                          VARCHAR2(100)     := NULL;    ln_object_version_number     NUMBER                    := 0;    ln_supp_tax_override_rate    PAY_US_EMP_FED_TAX_RULES_F.SUPP_TAX_OVERRIDE_RATE%TYPE;    ln_emp_fed_tax_rule_id         PAY_US_EMP_FED_TAX_RULES_F.EMP_FED_TAX_RULE_ID%TYPE; BEGIN    -- Find Date Track Mode    -- -------------------------------    dt_api.find_dt_upd_modes    (   -- Input data elements        -- ------------------------------       p_effective_date                   => TO_DATE('12-JUN-2011'),       p_base_table_name            => 'PER_ALL_ASSIGNMENTS_F',       p_base_key_column           => 'ASSIGNMENT_ID',       p_base_key_value               => ln_assignment_id,       -- Output data elements       -- -------------------------------       p_correction                          => lb_correction,       p_update                                => lb_update,       p_update_override              => lb_update_override,       p_update_change_insert   => lb_update_change_insert   );    IF ( lb_update_override = TRUE OR lb_update_change_insert = TRUE )  THEN      -- UPDATE_OVERRIDE      -- --------------------------------      lc_dt_ud_mode := 'UPDATE_OVERRIDE';  END IF;    IF ( lb_correction = TRUE )  THEN     -- CORRECTION     -- ----------------------     lc_dt_ud_mode := 'CORRECTION';  END IF;    IF ( lb_update = TRUE )  THEN      -- UPDATE      -- -------------      lc_dt_ud_mode := 'UPDATE';  END IF;       -- Update Employee Fed Tax Rule   -- ----------------------------------------------   pay_federal_tax_rule_api.update_fed_tax_rule   (   -- Input data elements       -- -----------------------------       p_effective_date                        => TO_DATE('20-JUN-2011'),       p_datetrack_update_mode   => lc_dt_ud_mode,       p_emp_fed_tax_rule_id         => 7417,       p_withholding_allowances  => 100,       p_fit_additional_tax                => 10,       p_fit_exempt                               => 'N',       p_supp_tax_override_rate     => 5,       -- Output data elements       -- --------------------------------      p_object_version_number       => ln_object_version_number,      p_effective_start_date               => ld_effective_start_date,      p_effective_end_date                => ld_effective_end_date   );    COMMIT; EXCEPTION           WHEN OTHERS THEN                          ROLLBACK;                          dbms_output.put_line(SQLERRM); END; / SHOW ERR;  

    Read the article

  • SQL SERVER – 4 Tips for ETL Software IDE Developers

    - by pinaldave
    In a previous blog, I introduced the notion of Semantic Types. To an end user, a seamlessly integrated semantic typing engine significantly increases the ease of use of an ETL IDE (integrated development environment, or developer studio). This led me to think about other ease-of-use issues I have encountered while building ETL applications. When I get stumped while programming, I find myself asking variations on these questions: "How do I…?" "Now what?" "Why isn't this working?" "Why do I have to redo the work I just did?" It seems to me that a good ETL IDE will anticipate these questions and seek to answer them before they are even asked. So here are my tips to help software vendors build developer IDEs that actually make development easier.

    How do I…? While developing an ETL application, have you ever asked yourself: "How do I set up the connection to my SQL Server database?", "How do I import my table definitions from Access?", etc.? An easy answer might be "read the manual", but sometimes product manuals are not robust or easily accessible. So, integrating robust how-to instructions directly into your ETL studio would help users get the information they need at the time they need it.

    Now what? IDEs in general know where you last clicked or performed an action using an input device such as a keyboard, so they should be able to reasonably predict the design context you are in and suggest the next steps accordingly. Context-sensitive suggestions based on the state of the user's work will help users move forward in ETL application development.

    Why isn't this working? Or why do I have to wait until I compile to be told about a critical design issue? If an ETL IDE is smart enough to signal to users what in their design structures is left to be completed or has been completed incorrectly, then the developer can spend much less time in the design, compile, error-correct loop. Just-in-time validation helps users detect and correct programming errors earlier in the ETL development life cycle (a small illustrative sketch follows at the end of this post).

    Why do I have to redo the work I just did? In ETL development, schemas, transformation rules, connectivity objects, etc., can be reused in various situations. Using mouse clicks to build and manage libraries of reusable design objects means that application development effort should decrease over time as the library acquires more objects.

    I met a great company at SQL Pass that is trying to address many of these usability issues. Check them out at www.expressor-software.com. What other ease-of-use suggestions do you have for ETL software vendors? Please post your valuable comments.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL
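    To make the just-in-time validation idea above concrete, here is a minimal, hypothetical sketch in Python. The DesignObject structure and the rule it checks are invented for illustration and do not correspond to any particular ETL product; the point is only that an IDE could re-run something like validate() after every edit and surface the messages immediately instead of waiting for a full compile.

        from dataclasses import dataclass, field

        @dataclass
        class DesignObject:
            """A made-up stand-in for an ETL design object (source, target, transform, ...)."""
            name: str
            required: list = field(default_factory=lambda: ["connection", "schema"])
            properties: dict = field(default_factory=dict)

            def validate(self) -> list:
                """Return human-readable issues; an empty list means the object is ready to compile."""
                return [f"{self.name}: '{prop}' is not set"
                        for prop in self.required
                        if prop not in self.properties]

        # The IDE would call this after each mouse click or keystroke.
        target = DesignObject(name="LoadCustomers", properties={"schema": "dbo.Customers"})
        for issue in target.validate():
            print(issue)   # e.g. "LoadCustomers: 'connection' is not set"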

    Read the article

  • SQL SERVER – Interview Questions & Answers Needs Your Help

    - by pinaldave
    About a year ago, I posted SQL Server related Interview Questions and Answers. It was very well received by the community, and I have received many comments, suggestions and emails on this subject. I am planning to upgrade the Interview Questions and Answers and take them to the next level. Here, I need your help. Please post your comments, suggestions, expectations, or potential interview questions (along with answers) here. Your input will be very valuable. As time goes by we all learn and get better. A few things were missing when those interview questions and answers were first prepared; now is the time to close that gap and make the interview questions more useful. If you already know it all, these questions and answers are not for you. They are for those who are eager to learn and need help in this area. If you do not want to leave a comment, I suggest sending me an email at pinal "at" SQLAuthority.com. Following is a reproduction of the original consolidation post for quick reference.

    SQL SERVER – 2008 – Interview Questions and Answers – Part 1
    SQL SERVER – 2008 – Interview Questions and Answers – Part 2
    SQL SERVER – 2008 – Interview Questions and Answers – Part 3
    SQL SERVER – 2008 – Interview Questions and Answers – Part 4
    SQL SERVER – 2008 – Interview Questions and Answers – Part 5
    SQL SERVER – 2008 – Interview Questions and Answers – Part 6
    SQL SERVER – 2008 – Interview Questions and Answers – Part 7
    SQL SERVER – 2008 – Interview Questions and Answers – Part 8
    Download SQL Server 2008 Interview Questions and Answers Complete List

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ScrollViewer.EnsureVisible for Windows Phone

    - by Daniel Moth
    In my Translator By Moth app, on both the current and saved pivot pages the need arose to programmatically scroll to the bottom. In the former case, it is when a translation takes place (if the text is too long, I want to scroll to the bottom of the translation so the user can focus on that, and not on their input text for translation). In the latter case, it is when a new translation is saved (it is added to the bottom of the list, so scrolling is required to make it visible). On both pages a ScrollViewer is used. In my exploration of the APIs through IntelliSense and MSDN I could not find a method that auto-scrolled to the bottom. So I hacked together a solution: I added a blank TextBlock to the bottom of each page (within the ScrollViewer, but above the translated TextBlock and the saved list) and tried to scroll it into view from code. After searching the web I found a little algorithm that did most of what I wanted (sorry, I do not have the reference handy, but thank you whoever it was) that, after minor tweaking, I turned into an extension method for the ScrollViewer that is very easy to use:

    this.Scroller.EnsureVisible(this.BlankText);

    The method itself I share with you here:

    public static void EnsureVisible(this System.Windows.Controls.ScrollViewer scroller, System.Windows.UIElement uiElem)
    {
        System.Diagnostics.Debug.Assert(scroller != null);
        System.Diagnostics.Debug.Assert(uiElem != null);

        scroller.UpdateLayout();

        double maxScrollPos = scroller.ExtentHeight - scroller.ViewportHeight;
        double scrollPos = scroller.VerticalOffset - scroller.TransformToVisual(uiElem).Transform(new System.Windows.Point(0, 0)).Y;

        if (scrollPos > maxScrollPos)
            scrollPos = maxScrollPos;
        else if (scrollPos < 0)
            scrollPos = 0;

        scroller.ScrollToVerticalOffset(scrollPos);
    }

    I am sure there are better ways, but this "worked for me" :-) Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • MS Expression Web 4 SuperPreview – Big Disappointment

    - by smehaffie
    I just downloaded Expression 4 and expected to see some improvements in the Web 4 SuperPreview application. The one main function I was expecting in this release is the ability to enter data and click on links so pages of a site can be accessed. There are many use cases where this functionality is needed, and quite a few people were vocal about it when MS first released the application.

    1) Where you have to log in to a site to access either all of the content or some of the content on the site.
    2) Where you have to enter data in a certain order and cannot go to the next page until the previous page's data is filled out (payment process, storefront, etc.).
    3) Where you just want to make sure things are displayed correctly based on data entered (validation messages, etc.).
    4) You need to make sure the links go to the correct page in all the different browsers. I have seen scenarios where links worked fine in all but one browser, or for some reason the text showed on screen but it was not a clickable link.

    IMO this application is a great idea, but until MS fixes the issue above and adds this functionality, SuperPreview is totally worthless unless you need it to test a totally static site that does not require any user input at all to get access to the content. There is no reason this feature should not have been in this release, and it should have been a priority to make sure it was.

    Let me know how you feel about the new version of the Web 4 SuperPreview application. Did MS really miss the target by not adding this functionality, or do I think it is a bigger deal than it really is? If you are actively using SuperPreview, please post how you are using it and the type of sites you are using it on.

    Read the article

  • E: mkinitramfs failure cpio 141 gzip 1

    - by Nagaraj Shindagi
    I'm using Ubuntu 12.04 LTS on a Dell PowerEdge R720 server and am facing a problem when I run apt-get -f install:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Setting up linux-image-3.2.0-37-generic-pae (3.2.0-37.58) ...
    Running depmod.
    update-initramfs: deferring update (hook will be called later)
    The link /initrd.img is a dangling link to /boot/initrd.img-3.2.0-37-generic-pae
    Examining /etc/kernel/postinst.d.
    run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic-pae /boot/vmlinuz-3.2.0-37-generic-pae
    update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic-pae
    gzip: stdout: No space left on device
    E: mkinitramfs failure cpio 141 gzip 1
    update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic-pae with 1.
    run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
    Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic-pae.postinst line 1010.
    dpkg: error processing linux-image-3.2.0-37-generic-pae (--configure):
     subprocess installed post-installation script returned error exit status 2
    dpkg: dependency problems prevent configuration of linux-image-generic-pae:
     linux-image-generic-pae depends on linux-image-3.2.0-37-generic-pae; however:
      Package linux-image-3.2.0-37-generic-pae is not configured yet.
    dpkg: error processing linux-image-generic-pae (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
     linux-image-3.2.0-37-generic-pae
     linux-image-generic-pae
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    I also tried apt-get clean, apt-get remove, apt-get autoremove, and apt-get purge; there is no difference, and it shows the same error message as above. I also checked the disk space:

    Filesystem      1K-blocks       Used  Available Use% Mounted on
    /dev/sda6        24030076     612456   22196964   3% /
    udev             16536644          4   16536640   1% /dev
    tmpfs             6618884       1164    6617720   1% /run
    none                 5120          0       5120   0% /run/lock
    none             16547208         72   16547136   1% /run/shm
    cgroup           16547208          0   16547208   0% /sys/fs/cgroup
    /dev/sda1           93207      75034      13361  85% /boot
    /dev/sda10        9611492    1096076    8027176  13% /tmp
    /dev/sda12        9611492     226340    8896912   3% /opt
    /dev/sda13        9611492     152516    8970736   2% /srv
    /dev/sda7         9611492     592208    8531044   7% /home
    /dev/sda8         9611492    2656736    6466516  30% /usr
    /dev/sda9         9611492     696468    8426784   8% /var
    /dev/sda14      961237336  134563516  777845764  15% /usr/data
    /dev/sda15      618991384   84498388  503050052  15% /usr/data1
    /dev/sda11        9611492     152616    8970636   2% /usr/local

    Is there a problem with how space is allotted to the partitions? Please let me know the solution; it's urgent. Please help me with this issue. Regards

    Read the article

< Previous Page | 720 721 722 723 724 725 726 727 728 729 730 731  | Next Page >