Search Results

Search found 841 results on 34 pages for 'balance'.

  • Sticky connection and HTTPS support for HAProxy

    - by Saif
    We have two HTTP load balancers running HAProxy and Heartbeat, with four Apache nodes in the cluster doing round-robin load balancing. The HTTP cluster is working fine, but we are having a problem with our portal because it uses SSO: we need sticky-connection support in HAProxy, and we also need load balancing for HTTPS traffic. Here's our HAProxy conf file:

        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.*    /var/log/haproxy.log
            #
            log         127.0.0.1 local0
            log         127.0.0.1 local1 notice
            chroot      /var/lib/haproxy
            pidfile     /var/run/haproxy.pid
            maxconn     4000
            user        haproxy
            group       haproxy
            daemon

            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode                    http
            log                     global
            option                  httplog
            option                  dontlognull
            option                  http-server-close
            option forwardfor       except 127.0.0.0/8
            option                  redispatch
            retries                 3
            timeout http-request    10s
            timeout queue           1m
            timeout connect         10s
            timeout client          1m
            timeout server          1m
            timeout http-keep-alive 10s
            timeout check           10s
            maxconn                 3000

        #---------------------------------------------------------------------
        # main frontend which proxys to the backends
        #---------------------------------------------------------------------
        frontend main *:5000
            acl url_static path_beg -i /static /images /javascript /stylesheets
            acl url_static path_end -i .jpg .gif .png .css .js
            use_backend static if url_static
            default_backend app

        #---------------------------------------------------------------------
        # static backend for serving up images, stylesheets and such
        #---------------------------------------------------------------------
        backend static
            balance roundrobin
            server  static 127.0.0.1:4331 check

        #---------------------------------------------------------------------
        # round robin balancing between the various backends
        #---------------------------------------------------------------------
        backend app

        listen ha-http 10.190.1.28:80
            mode http
            stats enable
            stats auth admin:xxxxxx
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /haproxy.txt HTTP/1.0
            server apache1 portal-04:80 cookie A check
            server apache2 im-01:80 cookie B check
            server apache3 im-02:80 cookie B check
            server apache4 im-03:80 cookie B check

    Please advise. Thanks for your help in advance.
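    For what it's worth, two things stand out in a config like this. First, the listen section gives apache2, apache3 and apache4 the same cookie value (B), so the JSESSIONID prefix cannot distinguish between them and stickiness breaks for those servers; each server needs a unique cookie value. Second, on HAProxy versions without native SSL termination (anything before 1.5), HTTPS can still be balanced in TCP mode with source-address stickiness. A hedged sketch of both ideas, not a drop-in fix (server names and the 10.190.1.28 address are taken from the config above):

        # give each server a unique cookie value so stickiness can work
        backend app
            balance roundrobin
            cookie SERVERID insert indirect nocache
            server apache1 portal-04:80 cookie s1 check
            server apache2 im-01:80     cookie s2 check
            server apache3 im-02:80     cookie s3 check
            server apache4 im-03:80     cookie s4 check

        # HTTPS cannot be inspected without terminating SSL, so balance it
        # in TCP mode and pin clients by source address instead of cookie
        listen ha-https 10.190.1.28:443
            mode tcp
            balance source
            server apache1 portal-04:443 check
            server apache2 im-01:443     check
            server apache3 im-02:443     check
            server apache4 im-03:443     check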

  • Network Load Balancing, intermittent port problem

    - by Jimmy Chandra
    Trying to troubleshoot an intermittent problem that I think might be related to an NLB issue. We are using Windows Network Load Balancing to balance load for our multi-server SharePoint front ends. Say Web Front End 1's IP is 192.168.1.100 and Web Front End 2's IP is 192.168.1.101; the NLB is set up to load balance both WFE servers for any incoming traffic to the IP 192.168.1.200. Sometimes, when we try to access the SharePoint site from a remote client using 192.168.1.200:8080 (say the site is set up to run on port 8080), it displays "page not found". Pinging 192.168.1.200 gets responses, but trying to telnet to 192.168.1.200:8080 just won't connect. However, browsing the SharePoint site directly on the individual WFEs (192.168.1.100 and 192.168.1.101) shows no problem whatsoever. My guess (we haven't had a chance to try it yet, but I think it would work) is that connecting remotely to an individual server would respond just fine, while any attempt to connect through the virtual IP (192.168.1.200) fails miserably. The funny thing is, after a while it returns to normal. Has anyone had a similar experience with this type of problem while implementing NLB? We are doing this in a virtual environment.

  • Balancing internal services using a Cisco CSS 11501

    - by Ladadadada
    First, the background to the problem: I have a Cisco CSS 11501 that I am using to load balance a few web servers. These web servers have two network interfaces, one internal and one external, and we are sending the requests to the internal interface. We have the CSS configured to do NAT because our web servers need to see the client's IP address. Because the TCP packets hit the web servers with a source address on the Internet, the web server tries to send the packet back to the client over the external interface and not through the load balancer. In order to stop these responses from being sent back out to the Internet via the external interface, we added a routing rule on these boxes so that all traffic with a source address on the Internet uses the load balancer as its gateway. This part works fine. What I would also like to do is use the CSS as a load balancer for internal services such as our MySQL slaves. When I do this, I run into a similar problem: the TCP connection goes from the web server to the load balancer and then from the load balancer to the MySQL slave, but the CSS spoofs a source address of the original web server. The MySQL slave then tries to send the response directly to the web server via the internal network and not via the load balancer. The ideal solution would be to tell the CSS not to do source-address spoofing on the internal network and only do it for requests originating on the Internet. Is this possible? Failing that, is there a way of directing the load-balanced traffic back through the load balancer while keeping the other traffic (say SSH) purely on the internal network? Is there another way of using the CSS 11501 to load balance internal services?

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend that forwards to Apache2 as the default backend (or nginx; it's Apache in this local test), and to the Node.js app if the URL contains socket.io in the request AND a given domain name. I have something like:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend all 0.0.0.0:80
            timeout client 5000
            default_backend www_backend
            acl is_soio url_dom(host) -i socket.io   # if the request contains socket.io
            acl is_chat hdr_dom(host) -i chaturl     # if the request comes from chaturl.com
            use_backend chat_backend if is_chat is_soio

        backend www_backend
            balance roundrobin
            option forwardfor        # This sets X-Forwarded-For
            timeout server 5000
            timeout connect 4000
            server server1 localhost:6060 weight 1 maxconn 1024 check   # forwards to apache2

        backend chat_backend
            balance roundrobin
            option forwardfor        # This sets X-Forwarded-For
            timeout queue 50000
            timeout server 50000
            timeout connect 50000
            server server1 localhost:5558 weight 1 maxconn 1024 check   # forwards to node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly, but it fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js). Why does it forward to Apache when it should forward to the Node.js app that serves those files? The weird thing is that if I access the socket.io file directly, after refreshing a few times, it loads. So I suppose it is "caching" the forwarding for the client when it makes the first request and reaches the Apache server. Any suggestion on how this can be solved, or what I can try or read about?
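    As an aside, one likely culprit in a config like this is the ACL pair rather than any caching: url_dom(host) is not a standard HAProxy fetch (hdr_dom(host) is what inspects the Host header), and requests for /socket.io/... are distinguished by their path, not their hostname, so use_backend chat_backend if is_chat is_soio (which requires both ACLs to match) may never fire for those URLs. A minimal hedged sketch of the usual pattern:

        acl is_soio path_beg -i /socket.io           # match socket.io by URL path
        acl is_chat hdr_dom(host) -i chaturl.com     # match the chat vhost by Host header
        use_backend chat_backend if is_chat is_soio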

  • Apache load balancer with https real servers and client certificates

    - by Jack Scheible
    Our network requirements state that ALL network traffic must be encrypted. The network configuration looks like this:

                                                      ------------
                                       /-- https --> | server 1  |
        |------------|                /               ------------
        |   Client   | -- https --> | Load Balancer | ------------
        |------------|                \  -- https -> | server 2  |
                                       \              ------------
                                        \             ------------
                                         \-- https -> | server 3  |
                                                      ------------

    And it has to pass client certificates. I've got a config that can do load balancing with in-the-clear real servers:

        <VirtualHost *:8666>
            DocumentRoot "/usr/local/apache/ssl_html"
            ServerName vmbigip1
            ServerAdmin [email protected]
            DirectoryIndex index.html

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /usr/local/apache/conf/server.crt
            SSLCertificateKeyFile /usr/local/apache/conf/server.key

            <Proxy balancer://mycluster>
                BalancerMember http://1.2.3.1:80
                BalancerMember http://1.2.3.2:80
                # technically we aren't blocking anyone, but could here
                Order Deny,Allow
                Deny from none
                Allow from all
                # Load Balancer Settings
                # A simple Round Robin load balancer.
                ProxySet lbmethod=byrequests
            </Proxy>

            # balancer-manager
            # This tool is built into the mod_proxy_balancer module and allows you
            # to do simple mods to the balanced group via a GUI web interface.
            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from all
            </Location>

            ProxyRequests Off
            ProxyPreserveHost On

            # Point of Balance
            # Allows you to explicitly name the location in the site to be
            # balanced; here we balance "/", or everything in the site.
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID
        </VirtualHost>

    What I need is for the servers in my load balancer to be

        BalancerMember https://1.2.3.1:443
        BalancerMember https://1.2.3.2:443

    But that does not work: I get SSL negotiation errors. And even when I do get that to work, I will still need to pass client certificates. Any help would be appreciated.
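    For reference, mod_ssl has a family of SSLProxy* directives that govern the balancer-to-backend TLS leg; a hedged sketch of the ones usually involved follows (the file paths are placeholders). Note that a client certificate cannot simply be relayed end to end: the balancer terminates the client's TLS session, so the backends see a certificate presented by the balancer itself, not by the client.

        SSLProxyEngine On
        # CA that signed the backend servers' certificates
        SSLProxyCACertificateFile /usr/local/apache/conf/backend-ca.crt
        SSLProxyVerify require
        # client certificate the balancer presents to the backends
        SSLProxyMachineCertificateFile /usr/local/apache/conf/proxy-client.pem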

  • HAProxy, health checking multiple servers with different host names

    - by Marco Bettiolo
    I need to load balance between multiple running servers with different host names, and I cannot set up the same virtual host on each one. Is it possible to have only one listen configuration with multiple servers and make the health checks honor the http-send-name-header Host directive? I am using HAProxy 1.5. I came up with this working haproxy.cfg; as you can see, I had to set a different hostname for each health check, since the health check ignores http-send-name-header Host. I would have preferred to use variables or some other method and keep things more concise.

        global
            log 127.0.0.1 local0 notice
            maxconn 2000
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            timeout connect 5000
            timeout client 10000
            timeout server 10000
            stats enable
            stats uri /haproxy?stats
            stats refresh 5s
            balance roundrobin
            option httpclose

        listen inbound :80
            option httpchk HEAD / HTTP/1.1\r\n
            server instance1 127.0.0.101 check inter 3000 fall 1 rise 1
            server instance2 127.0.0.102 check inter 3000 fall 1 rise 1

        listen instance1 127.0.0.101:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.example.com
            server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2

        listen instance2 127.0.0.102:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.bing.com
            server www.bing.com www.bing.com:80 check inter 5000 fall 3 rise 2

  • Allow more websocket connections

    - by Switz
    I want to load balance my Node.js (DerbyJS, to be specific) application on a basic Linode (512MB RAM). It can probably handle more than one process running at once. Queries and the database do not concern me, as I'm not doing anything intensive. The problem at the moment is that it can only handle up to ~40 websocket connections at once. I would love to get that number into the few-hundred-plus range. I anticipate a lot of traffic at launch, because it's a highly niche community with an engaged audience, but afterwards it should be fine with just ~20-40 connections at once, which it handles perfectly as of now. I don't mind spending a bit of money for a week or two worth of running, but I also don't want to switch production environments. How can I test the process to see how many instances I am able to run on the box? Will increasing the number of processes increase the number of websockets I can handle, or is that a limitation of the server's network? I have an old MacBook Pro running Linux sitting next to me that has 2GB RAM and a 2.8 GHz dual-core processor. Could I use this to handle some of the extra load? I could probably load balance with nginx to its IP. I'm on a FiOS home network. If you have any suggestions, I'd really appreciate it. Thanks
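    For what it's worth, a common way to answer the "how many processes" question empirically is Node's built-in cluster module: fork one worker per core and load-test each configuration. A minimal sketch (./app is a hypothetical entry point, and note that socket.io needs a shared store, such as its Redis store, once connections are spread across processes):

        // fork one worker per CPU core
        var cluster = require('cluster');
        var os = require('os');

        if (cluster.isMaster) {
          os.cpus().forEach(function () {
            cluster.fork();
          });
          cluster.on('exit', function (worker) {
            console.log('worker ' + worker.process.pid + ' died; restarting');
            cluster.fork();
          });
        } else {
          require('./app'); // hypothetical: starts the Derby/socket.io server
        }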

  • Please help me understand why my XSL Transform is not transforming

    - by Damovisa
    I'm trying to transform one XML format to another using XSL. Try as I might, I can't seem to get a result. I've hacked away at this for a while now with no success, and I'm not even getting any exceptions. I'm going to post the entire code, and hopefully someone can help me work out what I've done wrong. I'm aware there are likely to be problems in the XSL in terms of selects and matches, but I'm not fussed about that at the moment. The output I'm getting is the input XML without any XML tags; the transformation is simply not occurring. Here's my XML document:

        <?xml version="1.0"?>
        <Transactions>
          <Account>
            <PersonalAccount>
              <AccountNumber>066645621</AccountNumber>
              <AccountName>A Smith</AccountName>
              <CurrentBalance>-200125.96</CurrentBalance>
              <AvailableBalance>0</AvailableBalance>
              <AccountType>LOAN</AccountType>
            </PersonalAccount>
          </Account>
          <StartDate>2010-03-01T00:00:00</StartDate>
          <EndDate>2010-03-23T00:00:00</EndDate>
          <Items>
            <Transaction>
              <ErrorNumber>-1</ErrorNumber>
              <Amount>12000</Amount>
              <Reference>Transaction 1</Reference>
              <CreatedDate>0001-01-01T00:00:00</CreatedDate>
              <EffectiveDate>2010-03-15T00:00:00</EffectiveDate>
              <IsCredit>true</IsCredit>
              <Balance>-324000</Balance>
            </Transaction>
            <Transaction>
              <ErrorNumber>-1</ErrorNumber>
              <Amount>11000</Amount>
              <Reference>Transaction 2</Reference>
              <CreatedDate>0001-01-01T00:00:00</CreatedDate>
              <EffectiveDate>2010-03-14T00:00:00</EffectiveDate>
              <IsCredit>true</IsCredit>
              <Balance>-324000</Balance>
            </Transaction>
          </Items>
        </Transactions>

    Here's my XSLT:

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
          <xsl:output method="xml" />
          <xsl:param name="currentdate"></xsl:param>
          <xsl:template match="Transactions">
            <xsl:element name="OFX">
              <xsl:element name="SIGNONMSGSRSV1">
                <xsl:element name="SONRS">
                  <xsl:element name="STATUS">
                    <xsl:element name="CODE">0</xsl:element>
                    <xsl:element name="SEVERITY">INFO</xsl:element>
                  </xsl:element>
                  <xsl:element name="DTSERVER"><xsl:value-of select="$currentdate" /></xsl:element>
                  <xsl:element name="LANGUAGE">ENG</xsl:element>
                </xsl:element>
              </xsl:element>
              <xsl:element name="BANKMSGSRSV1">
                <xsl:element name="STMTTRNRS">
                  <xsl:element name="TRNUID">1</xsl:element>
                  <xsl:element name="STATUS">
                    <xsl:element name="CODE">0</xsl:element>
                    <xsl:element name="SEVERITY">INFO</xsl:element>
                  </xsl:element>
                  <xsl:element name="STMTRS">
                    <xsl:element name="CURDEF">AUD</xsl:element>
                    <xsl:element name="BANKACCTFROM">
                      <xsl:element name="BANKID">RAMS</xsl:element>
                      <xsl:element name="ACCTID"><xsl:value-of select="Account/PersonalAccount/AccountNumber" /></xsl:element>
                      <xsl:element name="ACCTTYPE"><xsl:value-of select="Account/PersonalAccount/AccountType" /></xsl:element>
                    </xsl:element>
                    <xsl:element name="BANKTRANLIST">
                      <xsl:element name="DTSTART"><xsl:value-of select="StartDate" /></xsl:element>
                      <xsl:element name="DTEND"><xsl:value-of select="EndDate" /></xsl:element>
                      <xsl:for-each select="Items/Transaction">
                        <xsl:element name="STMTTRN">
                          <xsl:element name="TRNTYPE"><xsl:choose><xsl:when test="IsCredit">CREDIT</xsl:when><xsl:otherwise>DEBIT</xsl:otherwise></xsl:choose></xsl:element>
                          <xsl:element name="DTPOSTED"><xsl:value-of select="EffectiveDate" /></xsl:element>
                          <xsl:element name="DTUSER"><xsl:value-of select="CreatedDate" /></xsl:element>
                          <xsl:element name="TRNAMT"><xsl:value-of select="Amount" /></xsl:element>
                          <xsl:element name="FITID" />
                          <xsl:element name="NAME"><xsl:value-of select="Reference" /></xsl:element>
                          <xsl:element name="MEMO"><xsl:value-of select="Reference" /></xsl:element>
                        </xsl:element>
                      </xsl:for-each>
                    </xsl:element>
                    <xsl:element name="LEDGERBAL">
                      <xsl:element name="BALAMT"><xsl:value-of select="Account/PersonalAccount/CurrentBalance" /></xsl:element>
                      <xsl:element name="DTASOF"><xsl:value-of select="EndDate" /></xsl:element>
                    </xsl:element>
                  </xsl:element>
                </xsl:element>
              </xsl:element>
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>

    Here's my method to transform my XML:

        public string TransformToXml(XmlElement xmlElement, Dictionary<string, object> parameters)
        {
            string strReturn = "";

            // Load the XSLT document
            XslCompiledTransform xslt = new XslCompiledTransform();
            xslt.Load(xsltFileName);

            // Arguments
            XsltArgumentList args = new XsltArgumentList();
            if (parameters != null && parameters.Count > 0)
            {
                foreach (string key in parameters.Keys)
                {
                    args.AddParam(key, "", parameters[key]);
                }
            }

            // Create a memory stream to write to
            Stream objStream = new MemoryStream();

            // Apply the transform
            xslt.Transform(xmlElement, args, objStream);
            objStream.Seek(0, SeekOrigin.Begin);

            // Read the contents of the stream
            StreamReader objSR = new StreamReader(objStream);
            strReturn = objSR.ReadToEnd();
            return strReturn;
        }

    The contents of strReturn is an XML declaration (<?xml version="1.0" encoding="utf-8"?>) followed by a raw dump of the contents of the original XML document, stripped of XML tags. What am I doing wrong here?
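    A side note on the symptom: output that looks like the source document with its tags stripped is the signature of XSLT's built-in template rules running, which happens when no user-defined template matches the input, for example because of a namespace mismatch or because the XmlElement passed in is not the expected Transactions root. A small hedged diagnostic, added temporarily to the stylesheet, reports what the processor actually sees:

        <!-- temporary diagnostic: shows which root element the processor matched -->
        <xsl:template match="/*">
          <root-seen name="{name()}" namespace="{namespace-uri()}"/>
        </xsl:template>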

  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
    In the Windows Azure platform there are 3 storage services we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year, Microsoft announced that Azure SDK 1.1 had been released; it supports a new type of storage, Drive, which allows us to operate on NTFS files in the cloud. I will cover it in the coming few posts, but now I would like to talk a bit about the Table Storage.

    Concept of Table Storage Service

    The most common development scenario is to retrieve, create, update and remove data from a data store, normally by communicating with a database. When we attempt to move our application over to the cloud, the most common requirement is a storage service. Windows Azure provides an in-built service that allows us to store structured data, called the Windows Azure Table Storage Service. The data stored in the table service is a collection of entities, and the entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key and the timestamp as the identifier for solving concurrency problems. Unlike a table in a database, the table service does not enforce a schema for its tables, which means you can have 2 entities in the same table with different property sets. The partition key is used for load balancing by the Azure OS and for group entity transactions. As you know, in the cloud you will never know which machine is hosting your application and your data; they can be moved based on the transaction weight and the number of requests. If the Azure OS finds that there are many requests connecting to your Book entities with the partition key "Novel", it will move them to another idle machine to increase performance. So when choosing the partition key for your entities, you need to make sure it indicates a category or group, so that the Azure OS can perform load balancing as you wish.

    Consuming the Table

    Although the table service looks like a database, you cannot access it the way you do now, through either ADO.NET or ODBC. The table service exposes itself through the ADO.NET Data Services protocol, which lets you consume it in a RESTful style via HTTP requests. The Azure SDK provides a set of classes to connect to it. There are 2 classes we might need: TableServiceContext and TableServiceEntity. TableServiceContext inherits from DataServiceContext, which represents the runtime context of an ADO.NET data service. It provides 4 methods mainly used by us:

        CreateQuery:  creates an IQueryable instance for a given entity type.
        AddObject:    adds the specified entity to the table service.
        UpdateObject: updates an existing entity in the table service.
        DeleteObject: deletes an entity from the table service.

    Before you operate on the table service you need to provide valid account information. It's something like the connection string of a database, but with the account name and account key you received when you created the storage service on the Windows Azure Developer Portal. After getting the CloudStorageAccount you can create a CloudTableClient instance, which provides a set of methods for using the table service. A very useful method is CreateTableIfNotExist: it creates the table container for you if it does not exist yet. (A small sketch follows below.)
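    The original post illustrated these steps with screenshots; as a small hedged sketch of the account setup just described (the setting name "DataConnectionString" is only an example):

        // parse the account from the role configuration and create the table client
        CloudStorageAccount account =
            CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        CloudTableClient tableClient = account.CreateCloudTableClient();

        // create the table container if it is not there yet
        tableClient.CreateTableIfNotExist("Account");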
    And then you can operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example; code always says more than sentences.

    Straightforward Accessing of the Table

    Here I would like to build a WCF service on the Windows Azure platform with, for now, just one requirement: it allows the client to create an account entity in the table service. The WCF service has a method named Register that accepts an instance of the account the client wants to create; after performing some validation, it adds the entity to the table service. So the first thing I did was create a Cloud Application in Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) Then I added a configuration item for the storage account through the Settings section under the cloud project. (Double-click the Services file under the Roles folder and navigate to the Settings section.) This setting will be used to retrieve my storage account information. Since for now I am just in the development phase, I selected "UseDevelopmentStorage=true". Then I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts, you know that this file defines what happens when the application starts and terminates on the cloud. What I need to do is, when the application starts, set the configuration publisher to load my config file with the config name I specified. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class, AccountService. I added the service method Register with the parameters email and password; it returns a boolean value to indicate the result, which is very simple. At this moment, if I press F5, the application is deployed to my local development fabric and I can see through the browser that my service runs well. Let's implement the service method Register to add a new entity to the table service. As I said before, the entities you want to store in the table service must have 3 properties: partition key, row key and timestamp. You could create a class with these 3 properties yourself, but the Azure SDK provides a base class for that, named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So what we need to do is simpler: create a class named Account derived from TableServiceEntity, and add my own properties: Email, Password, DateCreated and DateDeleted. DateDeleted is a nullable date-time value to indicate whether this entity has been deleted, and when. Did you notice that I missed something here? Yes, it's the partition key and row key I haven't assigned. The TableServiceEntity base class defines 2 constructors: a parameterless constructor, used to fill values into the properties from the table service when retrieving data, and one with 2 parameters, partition key and row key. As I said above, the partition key affects load balancing and the row key must be unique, so here I use the email as the partition key and the email plus a GUID as the row key. OK, now we have finished the entity class we want to store in the table service; a reconstruction follows below.
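    The post originally showed this class as a screenshot; here is a hedged reconstruction of the Account entity along the lines described above (the exact row-key format is an assumption):

        using System;
        using Microsoft.WindowsAzure.StorageClient;

        public class Account : TableServiceEntity
        {
            // parameterless constructor: used by the table service when
            // materializing query results
            public Account()
            {
            }

            // partition key = email, row key = email plus a GUID, as described above
            public Account(string email, string password)
                : base(email, email + "_" + Guid.NewGuid())
            {
                Email = email;
                Password = password;
                DateCreated = DateTime.UtcNow;
                DateDeleted = null;
            }

            public string Email { get; set; }
            public string Password { get; set; }
            public DateTime DateCreated { get; set; }
            public DateTime? DateDeleted { get; set; }
        }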
    The next step is to create a data access class to add it. The Azure SDK gives us a base class for this, named TableServiceContext, as I mentioned above. The TableServiceContext needs the storage account information in its constructor: the combination of the storage service URI that we created on the Windows Azure platform and the relevant account name and key. The TableServiceContext uses this information to find the related address and verify the account when operating on storage entities. Hence in my AccountDataContext class I need to override this constructor and pass the storage account into it. All entities are saved in the table storage in one or more tables, which we call "table containers". Before we operate on an entity, we need to make sure that the table container has been created in the storage. There's a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I perform it first, to make sure every method is invoked after the table has been created. Notice that I pass the storage account endpoint URI and the credentials to specify where my storage is located and who I am. Another piece of advice: make your entity class name the same as the table name when you create the table. It increases performance when you operate on it over the cloud, especially when querying. Since the Register WCF method adds a new account to the table service, I created a corresponding method to add the account entity. Before implementing it, I added a reference to System.Data.Services.Client to the project. This assembly provides some common methods of the ADO.NET Data Services client that can be used with the Windows Azure Table Service; I use its AddObject method to create my account entity. Since the table service does not fully implement ADO.NET Data Services, there are some methods in System.Data.Services.Client that TableServiceContext doesn't support, such as AddLinks. Then I implemented the service method to add the account entity through the AccountDataContext (a sketch of both classes follows below). You can see that in the service implementation I load the storage account information through my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext; if it's the first time this method is invoked, the constructor of the AccountDataContext creates a table container for me. Then I use the Add method to add the account entity to the table. Next, let's create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. The metadata of a WCF service cannot be retrieved if it's deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set. If we need its metadata, we can deploy it on the local development service and then change the endpoint to the address on the cloud. In the client-side app.config file I specified the endpoint as the local development fabric address, and then implemented the client to let me input an email and a password and invoke the WCF service to add my account. Let's run my application and see the result. Of course, it returns TRUE to me, and in the local SQL Express instance I can see that the data has been saved in the table.
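    Again in place of the original screenshots, a hedged sketch of the data context and service method described above (the configuration setting name is an example):

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public class AccountDataContext : TableServiceContext
        {
            public AccountDataContext(CloudStorageAccount account)
                : base(account.TableEndpoint.AbsoluteUri, account.Credentials)
            {
                // ensure the table container exists before any operation runs;
                // the table is named after the entity class, as advised above
                account.CreateCloudTableClient().CreateTableIfNotExist("Account");
            }

            public void Add(Account account)
            {
                AddObject("Account", account);
                SaveChanges();
            }
        }

        public class AccountService : IAccountService
        {
            public bool Register(string email, string password)
            {
                var storageAccount =
                    CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
                var context = new AccountDataContext(storageAccount);
                context.Add(new Account(email, password));
                return true;
            }
        }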
    Summary

    In this post I explained more about the Windows Azure Table Storage Service, and I created a small application to demonstrate how to connect to it and consume it through the ADO.NET Data Services managed library provided with the Azure SDK. I have only shown how to create an entity in the storage service. In the next post I will explain how to query entities with conditions through LINQ. I also plan to refactor my AccountDataContext class to make it dynamic for any kind of entity.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Ninject 2 and MVC 2.0

    - by theouteredge
    I've updated a project to VS2010 and MVC2 from VS2008 and MVC1, and I'm having problems with Ninject not finding controllers within Areas. Here is my global.asax.cs file:

        namespace Website
        {
            // Note: For instructions on enabling IIS6 or IIS7 classic mode,
            // visit http://go.microsoft.com/?LinkId=9394801
            public class MvcApplication : NinjectHttpApplication
            {
                public static StandardKernel NinjectKernel;

                public static void RegisterRoutes(RouteCollection routes)
                {
                    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                    routes.MapRoute(
                        "Balance",
                        "Balance/{action}/{month}/{year}",
                        new { controller = "Balance", action = "Index", month = DateTime.Now.Month, year = DateTime.Now.Year }
                    );

                    routes.MapRoute(
                        "Default", // Route name
                        "{controller}/{action}/{id}", // URL with parameters
                        new { controller = "Login", action = "Index", id = "" } // Parameter defaults
                    );
                }

                /*
                protected void Application_Start()
                {
                    AreaRegistration.RegisterAllAreas();
                    RegisterRoutes(RouteTable.Routes);
                    // initializes the NHProfiler so you can see what is going on with your queries
                    HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();
                }
                */

                protected override void OnApplicationStarted()
                {
                    RegisterRoutes(RouteTable.Routes);
                    AreaRegistration.RegisterAllAreas();
                    RegisterAllControllersIn(Assembly.GetExecutingAssembly());
                }

                protected void Application_Error(object sender, EventArgs e)
                {
                    var errorService = NinjectKernel.Get<IErrorLogService>();
                    errorService.LogError(HttpContext.Current.Server.GetLastError().GetBaseException(), "AppSite");
                }

                protected override IKernel CreateKernel()
                {
                    if (NinjectKernel == null)
                    {
                        NinjectKernel = new StandardKernel(new ServiceModule());
                    }
                    return NinjectKernel;
                }
            }

            public class ServiceModule : NinjectModule
            {
                public override void Load()
                {
                    Bind<IHelper>().To<Helper>().InRequestScope();
                    Bind<IErrorLogService>().To<ErrorLogService>();
                    Bind<INHSessionFactory>().To<NHSessionFactory>().InSingletonScope();
                    Bind<ISessionFactory>().ToMethod(ctx => ctx.Kernel.Get<INHSessionFactory>().GetSessionFactory())
                        .InSingletonScope();
                    Bind<INHSession>().To<NHSession>();
                    Bind<ISession>().ToMethod(ctx => ctx.Kernel.Get<INHSession>().GetSession());
                }
            }
        }

    Accessing controllers within the /Controllers folder works OK, but accessing controllers within /Areas/Member/Controllers throws the following error:

        Server Error in '/' Application.
        Cannot be null
        Parameter name: service

        Description: An unhandled exception occurred during the execution of the
        current web request. Please review the stack trace for more information
        about the error and where it originated in the code.

        Exception Details: System.ArgumentNullException: Cannot be null
        Parameter name: service

        Source Error: An unhandled exception was generated during the execution of
        the current web request. Information regarding the origin and location of
        the exception can be identified using the exception stack trace below.

        Stack Trace:

        [ArgumentNullException: Cannot be null  Parameter name: service]
           Ninject.ResolutionExtensions.GetResolutionIterator(IResolutionRoot root, Type service, Func`2 constraint, IEnumerable`1 parameters, Boolean isOptional, Boolean isUnique) +193
           Ninject.Web.Mvc.NinjectControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +41
           System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +66
           System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +124
           System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +50
           System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +48
           System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +16
           System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8771488
           System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184

        Version Information: Microsoft .NET Framework Version:4.0.30128; ASP.NET Version:4.0.30128.1

    The URL for this request is /Member/Controller/. If I change the URL to /Controller, the controller fires, but I get an error that the system cannot find the view in the path /Views, when it should be looking in /Areas/Member/Views. I have either done something wrong in the upgrade or I'm missing something, but I just can't figure out what. I've been trying to figure this out for 3 days...
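    One detail worth comparing against a fresh MVC 2 template: the default Application_Start registers areas before routes, while the OnApplicationStarted above does the reverse. A hedged sketch of the conventional ordering; whether this alone resolves the Ninject resolution error is uncertain:

        protected override void OnApplicationStarted()
        {
            // the MVC 2 template registers areas first, then routes, so that
            // area routes take precedence over the generic {controller} route
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
            RegisterAllControllersIn(Assembly.GetExecutingAssembly());
        }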

  • PLPGSQL array assignment not working, "array subscript in assignment must not be null"

    - by Koen Schmeets
    Hello there. When assigning mobile numbers to a varchar[] in a loop over query results, I get the following error: "array subscript in assignment must not be null". Also, I think the query that joins member UUIDs and group member UUIDs into one set, grouped on the user_id, could be done better, or maybe this is even why it is going wrong in the first place! Any help is very much appreciated. Thank you very much!

        CREATE OR REPLACE FUNCTION create_membermessage(
            in_company_uuid uuid,
            in_user_uuid uuid,
            in_destinationmemberuuids uuid[],
            in_destinationgroupuuids uuid[],
            in_title character varying,
            in_messagecontents character varying,
            in_timedelta interval,
            in_messagecosts numeric,
            OUT out_status integer,
            OUT out_status_description character varying,
            OUT out_value VARCHAR[],
            OUT out_trigger uuid[])
        RETURNS record
        LANGUAGE plpgsql
        AS $$
        DECLARE
            temp_count INTEGER;
            temp_costs NUMERIC;
            temp_balance NUMERIC;
            temp_campaign_uuid UUID;
            temp_record RECORD;
            temp_mobilenumbers VARCHAR[];
            temp_destination_uuids UUID[];
            temp_iterator INTEGER;
        BEGIN
            out_status := NULL;
            out_status_description := NULL;
            out_value := NULL;
            out_trigger := NULL;

            SELECT INTO temp_count COUNT(*)
            FROM costs
            WHERE costtype = 'MEMBERMESSAGE'
              AND company_uuid = in_company_uuid
              AND startdatetime < NOW()
              AND (enddatetime > NOW() OR enddatetime IS NULL);

            IF temp_count > 1 THEN
                out_status := 1;
                out_status_description := 'Invalid rows in costs table!';
                RETURN;
            ELSEIF temp_count = 1 THEN
                SELECT INTO temp_costs costs
                FROM costs
                WHERE costtype = 'MEMBERMESSAGE'
                  AND company_uuid = in_company_uuid
                  AND startdatetime < NOW()
                  AND (enddatetime > NOW() OR enddatetime IS NULL);
            ELSE
                SELECT INTO temp_costs costs
                FROM costs
                WHERE costtype = 'MEMBERMESSAGE'
                  AND company_uuid IS NULL
                  AND startdatetime < NOW()
                  AND (enddatetime > NOW() OR enddatetime IS NULL);
            END IF;

            IF temp_costs != in_messagecosts THEN
                out_status := 2;
                out_status_description := 'Message costs have changed during sending of the message';
                RETURN;
            ELSE
                SELECT INTO temp_balance balance FROM companies WHERE company_uuid = in_company_uuid;

                SELECT INTO temp_count COUNT(*)
                FROM users
                WHERE (user_uuid = ANY(in_destinationmemberuuids))
                   OR (user_uuid IN (SELECT user_uuid FROM targetgroupusers
                                     WHERE targetgroup_uuid = ANY(in_destinationgroupuuids)))
                GROUP BY user_uuid;

                temp_campaign_uuid := generate_uuid('campaigns', 'campaign_uuid');

                INSERT INTO campaigns (company_uuid, campaign_uuid, title, senddatetime,
                                       startdatetime, enddatetime, messagetype, state, message)
                VALUES (in_company_uuid, temp_campaign_uuid, in_title, NOW() + in_timedelta,
                        NOW() + in_timedelta, NOW() + in_timedelta, 'MEMBERMESSAGE', 'DRAFT',
                        in_messagecontents);

                IF in_timedelta > '00:00:00' THEN
                ELSE
                    IF temp_balance < (temp_costs * temp_count) THEN
                        UPDATE campaigns SET state = 'INACTIVE' WHERE campaign_uuid = temp_campaign_uuid;
                        out_status := 2;
                        out_status_description := 'Insufficient balance';
                        RETURN;
                    ELSE
                        UPDATE campaigns SET state = 'ACTIVE' WHERE campaign_uuid = temp_campaign_uuid;
                        UPDATE companies SET balance = (temp_balance - (temp_costs * temp_count))
                        WHERE company_uuid = in_company_uuid;

                        SELECT INTO temp_destination_uuids array_agg(DISTINCT(user_uuid))
                        FROM users
                        WHERE (user_uuid = ANY(in_destinationmemberuuids))
                           OR (user_uuid IN (SELECT user_uuid FROM targetgroupusers
                                             WHERE targetgroup_uuid = ANY(in_destinationgroupuuids)));

                        RAISE NOTICE 'Array is %', temp_destination_uuids;

                        FOR temp_record IN (SELECT u.firstname, m.mobilenumber
                                            FROM users AS u
                                            LEFT JOIN mobilenumbers AS m ON m.user_uuid = u.user_uuid
                                            WHERE u.user_uuid = ANY(temp_destination_uuids))
                        LOOP
                            IF temp_record.mobilenumber IS NOT NULL AND temp_record.mobilenumber != '' THEN
                                -- THIS IS WHERE IT GOES WRONG
                                temp_mobilenumbers[temp_iterator] := ARRAY[temp_record.firstname::VARCHAR,
                                                                           temp_record.mobilenumber::VARCHAR];
                                temp_iterator := temp_iterator + 1;
                            END IF;
                        END LOOP;

                        out_status := 0;
                        out_status_description := 'Message created successfully';
                        out_value := temp_mobilenumbers;
                        RETURN;
                    END IF;
                END IF;
            END IF;
        END$$;
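    For the record, PL/pgSQL variables declared without a DEFAULT start out NULL, so temp_iterator is NULL on its first use as a subscript, which is exactly the error reported. A second issue is that each element of a varchar[] is a single varchar, so assigning ARRAY[firstname, mobilenumber] to one element won't do what the loop intends. A hedged sketch of one way to restructure the loop body (the '|' separator is just an example):

        temp_iterator := 1;  -- uninitialized PL/pgSQL variables start as NULL

        FOR temp_record IN SELECT u.firstname, m.mobilenumber
                           FROM users AS u
                           LEFT JOIN mobilenumbers AS m ON m.user_uuid = u.user_uuid
                           WHERE u.user_uuid = ANY(temp_destination_uuids)
        LOOP
            IF temp_record.mobilenumber IS NOT NULL AND temp_record.mobilenumber != '' THEN
                -- an element of a varchar[] holds one varchar, so concatenate
                -- (or use a two-dimensional array) instead of assigning ARRAY[...]
                temp_mobilenumbers[temp_iterator] := temp_record.firstname || '|' ||
                                                     temp_record.mobilenumber;
                temp_iterator := temp_iterator + 1;
            END IF;
        END LOOP;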

  • C++ Compound Interest Exercise

    - by Lameste
    I'm a beginner trying to learn C++ using "C++ Primer Plus", Sixth Edition. I'm on Chapter 5, going over loops. I was doing a programming exercise from the book; the problem is:

        Daphne invests $100 at 10% simple interest. That is, every year the
        investment earns 10% of the original investment, or $10 each and every
        year: interest = 0.10 x original balance. At the same time, Cleo invests
        $100 at 5% compound interest. That is, interest is 5% of the current
        balance, including previous additions of interest: interest = 0.05 x
        current balance. Cleo earns 5% of $100 the first year, giving her $105.
        The next year she earns 5% of $105, or $5.25, and so on. Write a program
        that finds how many years it takes for the value of Cleo's investment to
        exceed the value of Daphne's investment and then displays the value of
        both investments at that time.

    Here is the code I have written for this exercise; I'm not getting good results, though.

        #include <iostream>
        #include <array>

        double Daphne(int, double, double);
        double Chleo(double, double);

        int main()
        {
            using namespace std;

            int p = 100;      // Principal
            double i1 = 0.1;  // 10% interest rate
            double i2 = 0.05; // 5% interest rate
            double dInv = 0;  // Daphne's investment
            double cInv = 0;  // Chleo's investment
            int t = 1;        // Starting at year 1
            double s1 = 0;    // Sum 1 for Daphne
            double s2 = 0;    // Sum 2 for Chleo

            s1 = p + 10;        // Initial interest (base case after year 1) for Daphne
            s2 = p + (i2 * p);  // Initial interest (base case after year 1) for Chleo
            /*cout << s1 << endl;
            cout << s2 << endl;*/

            while (cInv < dInv)
            {
                dInv = Daphne(p, i1, s1);
                cInv = Chleo(i2, s2);
                t++;
            }

            cout << "The time taken for Chleos investment to exceed Daphnes was: " << t << endl;
            cout << "Daphnes investment at " << t << " years is: " << dInv << endl;
            cout << "Chleos invesment at " << t << " years is: " << cInv << endl;
            system("pause");
            return 0;
        }

        double Daphne(int p, double i, double s1)
        {
            s1 = s1 + (p * i);
            return s1;
        }

        double Chleo(double i, double s2)
        {
            s2 = s2 + (s2 * i);
            return s2;
        }

    Output from console:

        The time taken for Chleos investment to exceed Daphnes was: 1
        Daphnes investment at 1 years is: 0
        Chleos invesment at 1 years is: 0
        Press any key to continue . . .

    Can anyone explain why I'm getting this result? The while loop is supposed to continue executing statements until Chleo's investment exceeds Daphne's.
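    A note on what the output shows: both dInv and cInv are initialized to 0, so the condition cInv < dInv is false on the very first test and the loop body never runs, leaving the printed values at 0 and t at its starting value. (The condition is also inverted relative to the intent: the loop should keep going while Cleo has not yet passed Daphne, and the functions as written never accumulate because s1 and s2 are never updated between iterations.) A minimal corrected sketch under those observations:

        #include <iostream>

        int main()
        {
            double daphne = 100.0; // simple interest: 10% of the original $100 each year
            double cleo = 100.0;   // compound interest: 5% of the current balance
            int years = 0;

            while (cleo <= daphne)  // keep going until Cleo pulls ahead
            {
                daphne += 100.0 * 0.10;
                cleo += cleo * 0.05;
                ++years;
            }

            std::cout << "Cleo passes Daphne after " << years << " years\n"
                      << "Daphne: $" << daphne << "\n"
                      << "Cleo:   $" << cleo << "\n";
            return 0;
        }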

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

    How I Got 15x Improvement Without Really Trying

    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

    Introduction

    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization; improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code; improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor to student or student to student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties.
    Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days by applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive.

    Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices, but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

        #    cache        redundant    loop         performance
             performance  operations   structures   improvement
        1    x            x                         15.5
        2    x                                       2.8
        3                 x            x             2.5
        4    x                                       2.1
        5    x            x                          2.0
        6                 x                          5.0
        7    x                                       5.8
        8                 x                          6.3
        9                                            2.2
        10   x                         x             3.3

        Table 1 — Area of improvement and performance gains of 10 codes

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance

    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
    6 out of the 10 codes studied here benefited from such high level optimizations.

    Array Accesses

    The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

        do I = 0, 1010, delta_x
           IM = I - delta_x
           IP = I + delta_x
           do J = 5, 995, delta_x
              JM = J - delta_x
              JP = J + delta_x
              T1 = CA1(IP, J) + CA1(I, JP)
              T2 = CA1(IM, J) + CA1(I, JM)
              S1 = T1 + T2 - 4 * CA1(I, J)
              CA(I, J) = CA1(I, J) + D * S1
           end do
        end do

    In code 2, the culprit is conditionals:

        do I = 1, N
           do J = 1, N
              If (IFLAG(I,J) .EQ. 0) then
                 T1 = Value(I, J-1)
                 T2 = Value(I-1, J)
                 T3 = Value(I, J)
                 T4 = Value(I+1, J)
                 T5 = Value(I, J+1)
                 Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
                 Delta = ABS(T3 - Value(I,J))
                 If (Delta .GT. MaxDelta) MaxDelta = Delta
              endif
           enddo
        enddo

    I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

    Code 5 has four-dimensional arrays and loops nested four deep. For all of the reasons cited above, the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

        L1: for i      L2: for i      L3: for i
            for l          for l          for j
            for k          for j          for k
            for j          for k          for l

    So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses, but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

    Array Strides

    When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
    Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

        do j = 1, GZ
           do i = 1, GZ
              T1 = CA(i+0, j-1) + CA(i-1, j+0)
              T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
              S1 = T1 + T4 - 4 * CA1(i+0, j+0)
              CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
           enddo
        enddo

    where CA and CA1 are compressed arrays of size GZ.

    Code 7 traverses a list of objects, selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

    Data reuse

    In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

    In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

        do J = 1, GZ-2, 2
           do I = 1, GZ-2, 2
              T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
              T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
              T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
              T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
              T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
              T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
              T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
              S1 = T1 + T4 - 4 * CA1(i+0, j+0)
              S2 = T2 + T5 - 4 * CA1(i+1, j+0)
              S3 = T3 + T6 - 4 * CA1(i+0, j+1)
              S4 = T4 + T7 - 4 * CA1(i+1, j+1)
              CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
              CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
              CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
              CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
           enddo
        enddo

    The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
    In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before

        for (k = 0; k < NK[u]; k++) {
            sum = 0.0;
            for (y = 0; y < NY; y++) {
                sum += W[y][u][k] * delta[y];
            }
            backprop[i++] = sum;
        }

    and after code

        for (k = 0; k < KK - 8; k += 8) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (y = 0; y < NY; y++) {
                sum0 += W[y][0][k+0] * delta[y];
                sum1 += W[y][0][k+1] * delta[y];
                sum2 += W[y][0][k+2] * delta[y];
                sum3 += W[y][0][k+3] * delta[y];
                sum4 += W[y][0][k+4] * delta[y];
                sum5 += W[y][0][k+5] * delta[y];
                sum6 += W[y][0][k+6] * delta[y];
                sum7 += W[y][0][k+7] * delta[y];
            }
            backprop[k+0] = sum0;
            backprop[k+1] = sum1;
            backprop[k+2] = sum2;
            backprop[k+3] = sum3;
            backprop[k+4] = sum4;
            backprop[k+5] = sum5;
            backprop[k+6] = sum6;
            backprop[k+7] = sum7;
        }

    for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial, and it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

    Reducing instruction count

    Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter, because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

    Memory operations

    The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions, since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs, because in the absence of equivalence statements it is a violation of the language's semantics for two names to share memory.

    Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

        for (y = 0; y < NY; y++) {
            i = 0;
            for (u = 0; u < NU; u++) {
                for (k = 0; k < NK[u]; k++) {
                    dW[y][u][k] += delta[y] * I1[i++];
                }
            }
        }

    Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

    #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

    a0 = MAT4D(a,q,0,j,k)

before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i].

A similar problem appears in code 6, a Fortran program. The key loop in this program is

    do n1 = 1, nh
      nx1 = (n1 - 1) / nz + 1
      nz1 = n1 - nz * (nx1 - 1)
      do n2 = 1, nh
        nx2 = (n2 - 1) / nz + 1
        nz2 = n2 - nz * (nx2 - 1)
        ndx = nx2 - nx1
        ndy = nz2 - nz1
        gxx = grn(1,ndx,ndy)
        gyy = grn(2,ndx,ndy)
        gxy = grn(3,ndx,ndy)
        balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
        balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
      end do
    end do

The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

Data operations

Opportunities to reduce data operations come in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N*M possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

    for (i = 0; i < N; i += 8) {
        for (j = 0; j < M; j++) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (k = 0; k < K; k++) {
                sum0 += A[i+0][k] * B[j][k];
                sum1 += A[i+1][k] * B[j][k];
                sum2 += A[i+2][k] * B[j][k];
                sum3 += A[i+3][k] * B[j][k];
                sum4 += A[i+4][k] * B[j][k];
                sum5 += A[i+5][k] * B[j][k];
                sum6 += A[i+6][k] * B[j][k];
                sum7 += A[i+7][k] * B[j][k];
            }
            C[i+0][j] = sum0;
            C[i+1][j] = sum1;
            C[i+2][j] = sum2;
            C[i+3][j] = sum3;
            C[i+4][j] = sum4;
            C[i+5][j] = sum5;
            C[i+6][j] = sum6;
            C[i+7][j] = sum7;
        }
    }

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.
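One caveat worth adding (mine, not the paper's): as written, the 8-way unrolled loops in codes 3 and 4 only handle trip counts that are multiples of 8; in general a rolled cleanup loop covers the remainder. A self-contained C sketch of the pattern, using a simple dot product with illustrative names:

    /* Unroll by 4 with multiple accumulators, then finish the
       remaining 0-3 iterations in a rolled cleanup loop. */
    double dot(const double *a, const double *b, int n)
    {
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        int k;
        for (k = 0; k + 4 <= n; k += 4) {
            s0 += a[k+0] * b[k+0];
            s1 += a[k+1] * b[k+1];
            s2 += a[k+2] * b[k+2];
            s3 += a[k+3] * b[k+3];
        }
        for (; k < n; k++)          /* remainder */
            s0 += a[k] * b[k];
        return s0 + s1 + s2 + s3;
    }

The multiple accumulators also break the summation dependence chain, which is part of why unrolling helps on pipelined floating-point units.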
In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index.

    for (j = 0; j < N; j++) {
        for (i = 0; i < M; i++) {
            r = i * hrmax;
            R = A[j];
            temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
            high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
            low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
            kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*R, 2.0))
                               : low * pow(R/PRM[6], PRM[5]);
            < rest of loop omitted >
        }
    }

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array.

    for (i = 0; i < M; i++) {
        r = i * hrmax;
        TEMP[i] = pow(r, PRM[3]);
    }

[N.B. – the case for PRM[3] == 0.0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

    for (j = 0; j < N; j++) {
        R = rig[j] / 1000.;
        tmp1 = kcoeff * par[2] * beta[j] * par[4];
        tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
        tmp3 = 1.0 + (par[4] * par[4] * R * R);
        tmp4 = par[6] * par[6] / tmp2;
        tmp5 = R * R / tmp3;
        tmp6 = pow(R / par[6], par[5]);
        if ((par[3] == 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++)
                KAP[i] = tmp1 * tmp5;
        } else if ((par[3] == 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++)
                KAP[i] = tmp1 * tmp4 * tmp6;
        } else if ((par[3] != 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++)
                KAP[i] = tmp1 * TEMP[i] * tmp5;
        } else if ((par[3] != 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++)
                KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
        }
        for (i = 0; i < M; i++) {
            kap = KAP[i];
            r = i * hrmax;
            < rest of loop omitted >
        }
    }

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.
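The pointer-swapping idiom mentioned above is worth showing, since it turns an O(N*N) copy per iteration into an O(1) exchange. A hedged C sketch; relax() and the buffer names are hypothetical stand-ins for the solver kernel:

    void relax(const double *src, double *dst, int n);  /* hypothetical kernel */

    /* Swap the roles of the "old" and "new" buffers each iteration
       instead of copying new values back into the old array. */
    void iterate(double *buf_a, double *buf_b, int n, int steps)
    {
        double *old_grid = buf_a, *new_grid = buf_b;
        for (int t = 0; t < steps; t++) {
            relax(old_grid, new_grid, n);   /* reads old, writes new */
            double *tmp = old_grid;         /* O(1) reset */
            old_grid = new_grid;
            new_grid = tmp;
        }
    }

The Fortran (N, N, 2) trick in the text achieves the same effect by swapping index values instead of pointers.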
Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

    MaxDelta = 0.0
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        if (Delta > MaxDelta) MaxDelta = Delta
      enddo
    enddo
    if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

    MaxDelta = .false.
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
      enddo
    enddo
    if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10, where I split the loop

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

into two disjoint loops

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
      end do
    end do

    do i = 1, n
      do j = 1, m
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to significantly improve the single processor performance of all codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research, as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.


  • Doing your first mock with JustMock

    - by mehfuzh
In this post, I will start with a more traditional mocking example that includes a fund transfer scenario between two different currency accounts using JustMock. Our target interface that we will be mocking looks similar to:

    public interface ICurrencyService
    {
        float GetConversionRate(string fromCurrency, string toCurrency);
    }

Moving forward, the SUT (the class that consumes the service and is invoked by the user, provided that the ICurrencyService is passed in a DI style) looks like:

    public class AccountService : IAccountService
    {
        private readonly ICurrencyService currencyService;

        public AccountService(ICurrencyService currencyService)
        {
            this.currencyService = currencyService;
        }

        #region IAccountService Members

        public void TransferFunds(Account from, Account to, float amount)
        {
            from.Withdraw(amount);
            float conversionRate = currencyService.GetConversionRate(from.Currency, to.Currency);
            float convertedAmount = amount * conversionRate;
            to.Deposit(convertedAmount);
        }

        #endregion
    }

As we can see, the TransferFunds action implemented from IAccountService takes in a source account from which it withdraws some money and a target account to which the transfer takes place using the provided conversion rate. Our first step is to create the mock. The syntax for creating your instance mocks is pretty much the same and is valid for all interfaces and non-sealed/sealed concrete instance classes. You can pass in additional options, such as whether it is a strict mock or not; by default all mocks in JustMock are loose, and you can use them as default-valued objects or stubs as well.

    ICurrencyService currencyService = Mock.Create<ICurrencyService>();

Using JustMock, setting up your expectations and asserting them always goes through Mock.Arrange|Assert, and this is pretty much the same syntax no matter what type of mocking you are doing. Therefore, in the above scenario we want to make sure that the conversion rate always returns 2.20f when converting from GBP to CAD. To do so, we need to arrange it in the following way:

    Mock.Arrange(() => currencyService.GetConversionRate("GBP", "CAD")).Returns(2.20f).MustBeCalled();

Here, I have additionally marked the mock call as a must. That means it should be invoked anywhere in the code before we do Mock.Assert; we can also assert mocks directly through lambda expressions, but the more general Mock.Assert(mocked) will assert only the setups that are marked as "MustBeCalled()". Now, coming back to the main topic: as we have set up the mock, it's time to act on it. Therefore, we first create our account service class and create our from and to accounts respectively.
    var accountService = new AccountService(currencyService);

    var canadianAccount = new Account(0, "CAD");
    var britishAccount = new Account(0, "GBP");

Next, we add some money to the GBP account:

    britishAccount.Deposit(100);

Finally, we do our transfer with the following:

    accountService.TransferFunds(britishAccount, canadianAccount, 100);

Once everything is completed, we need to make sure that things are as we expected, so it's time for assertions. Here, we first do the general assertions:

    Assert.Equal(0, britishAccount.Balance);
    Assert.Equal(220, canadianAccount.Balance);

Following that, we do our mock assertion; as we have marked the call as "MustBeCalled", it will make sure that our mock is actually invoked. Moreover, we can add filters like how many times our expected mock call has occurred; that will be covered in coming posts.

    Mock.Assert(currencyService);

So far, that actually concludes our first mock with JustMock. Do stay tuned for more. Enjoy!!


  • Excel VBA students database [on hold]

    - by BENTET
    I am developing an Excel database to record students' details. The headings of the table are Date, Year, Payment Slip No., Student Number, Name, Fees, Amount Paid, Balance and Previous Balance. I have been able to put up some code which is working, but there are some setbacks that I want to be addressed. I developed a userform for each programme of the institution and assigned each to a specific sheet, but whenever I add a record, it does not go to the assigned sheet; it goes to the active sheet instead. Also, I want to hide all sheets and work only on the userforms when the workbook is opened. One problem I am also facing is the update code: whenever I update a record on a specific row, it edits the record on the first row rather than the record I edited. This is the code I have built so far; I am virtually a novice in programming.

    Private Sub cmdAdd_Click()
        Dim lastrow As Long
        lastrow = Sheets("Sheet4").Range("A" & Rows.Count).End(xlUp).Row
        Cells(lastrow + 1, "A").Value = txtDate.Text
        Cells(lastrow + 1, "B").Value = ComBox1.Text
        Cells(lastrow + 1, "C").Value = txtSlipNo.Text
        Cells(lastrow + 1, "D").Value = txtStudentNum.Text
        Cells(lastrow + 1, "E").Value = txtName.Text
        Cells(lastrow + 1, "F").Value = txtFees.Text
        Cells(lastrow + 1, "G").Value = txtAmountPaid.Text
        txtDate.Text = ""
        ComBox1.Text = ""
        txtSlipNo.Text = ""
        txtStudentNum.Text = ""
        txtName.Text = ""
        txtFees.Text = ""
        txtAmountPaid.Text = ""
    End Sub

    Private Sub cmdClear_Click()
        txtDate.Text = ""
        ComBox1.Text = ""
        txtSlipNo.Text = ""
        txtStudentNum.Text = ""
        txtName.Text = ""
        txtFees.Text = ""
        txtAmountPaid.Text = ""
        txtBalance.Text = ""
    End Sub

    Private Sub cmdClearD_Click()
        txtDate.Text = ""
        ComBox1.Text = ""
        txtSlipNo.Text = ""
        txtStudentNum.Text = ""
        txtName.Text = ""
        txtFees.Text = ""
        txtAmountPaid.Text = ""
        txtBalance.Text = ""
    End Sub

    Private Sub cmdClose_Click()
        Unload Me
    End Sub

    Private Sub cmdDelete_Click()
        'declare the variables
        Dim findvalue As Range
        Dim cDelete As VbMsgBoxResult
        'check for values
        If txtStudentNum.Value = "" Or txtName.Value = "" Or txtDate.Text = "" Or ComBox1.Text = "" Or txtSlipNo.Text = "" Or txtFees.Text = "" Or txtAmountPaid.Text = "" Or txtBalance.Text = "" Then
            MsgBox "There is not data to delete"
            Exit Sub
        End If
        'give the user a chance to change their mind
        cDelete = MsgBox("Are you sure that you want to delete this student", vbYesNo + vbDefaultButton2, "Are you sure????")
        If cDelete = vbYes Then
            'delete the row
            Set findvalue = Sheet4.Range("D:D").Find(What:=txtStudentNum, LookIn:=xlValues)
            findvalue.EntireRow.Delete
        End If
        'clear the controls
        txtDate.Text = ""
        ComBox1.Text = ""
        txtSlipNo.Text = ""
        txtStudentNum.Text = ""
        txtName.Text = ""
        'txtFees.Text = ""
        txtAmountPaid.Text = ""
        txtBalance.Text = ""
    End Sub

    Private Sub cmdSearch_Click()
        Dim lastrow As Long
        Dim currentrow As Long
        Dim studentnum As String
        lastrow = Sheets("Sheet4").Range("A" & Rows.Count).End(xlUp).Row
        studentnum = txtStudentNum.Text
        For currentrow = 2 To lastrow
            If Cells(currentrow, 4).Text = studentnum Then
                txtDate.Text = Cells(currentrow, 1)
                ComBox1.Text = Cells(currentrow, 2)
                txtSlipNo.Text = Cells(currentrow, 3)
                txtStudentNum.Text = Cells(currentrow, 4).Text
                txtName.Text = Cells(currentrow, 5)
                txtFees.Text = Cells(currentrow, 6)
                txtAmountPaid.Text = Cells(currentrow, 7)
                txtBalance.Text = Cells(currentrow, 8)
            End If
        Next currentrow
        txtStudentNum.SetFocus
    End Sub

    Private Sub cmdSearchName_Click()
        Dim lastrow As Long
        Dim currentrow As Long
        Dim studentname As String
        lastrow = Sheets("Sheet4").Range("A" & Rows.Count).End(xlUp).Row
        studentname = txtName.Text
        For currentrow = 2 To lastrow
            If Cells(currentrow, 5).Text = studentname Then
                txtDate.Text = Cells(currentrow, 1)
                ComBox1.Text = Cells(currentrow, 2)
                txtSlipNo.Text = Cells(currentrow, 3)
                txtStudentNum.Text = Cells(currentrow, 4)
                txtName.Text = Cells(currentrow, 5).Text
                txtFees.Text = Cells(currentrow, 6)
                txtAmountPaid.Text = Cells(currentrow, 7)
                txtBalance.Text = Cells(currentrow, 8)
            End If
        Next currentrow
        txtName.SetFocus
    End Sub

    Private Sub cmdUpdate_Click()
        Dim tdate As String
        Dim tlevel As String
        Dim tslipno As String
        Dim tstudentnum As String
        Dim tname As String
        Dim tfees As String
        Dim tamountpaid As String
        Dim currentrow As Long
        Dim lastrow As Long
        'If Cells(currentrow, 5).Text = studentname Then
        'txtDate.Text = Cells(currentrow, 1)
        lastrow = Sheets("Sheet4").Range("A" & Columns.Count).End(xlUp).Offset(0, 1).Column
        For currentrow = 2 To lastrow
            tdate = txtDate.Text
            Cells(currentrow, 1).Value = tdate
            txtDate.Text = Cells(currentrow, 1)
            tlevel = ComBox1.Text
            Cells(currentrow, 2).Value = tlevel
            ComBox1.Text = Cells(currentrow, 2)
            tslipno = txtSlipNo.Text
            Cells(currentrow, 3).Value = tslipno
            txtSlipNo = Cells(currentrow, 3)
            tstudentnum = txtStudentNum.Text
            Cells(currentrow, 4).Value = tstudentnum
            txtStudentNum.Text = Cells(currentrow, 4)
            tname = txtName.Text
            Cells(currentrow, 5).Value = tname
            txtName.Text = Cells(currentrow, 5)
            tfees = txtFees.Text
            Cells(currentrow, 6).Value = tfees
            txtFees.Text = Cells(currentrow, 6)
            tamountpaid = txtAmountPaid.Text
            Cells(currentrow, 7).Value = tamountpaid
            txtAmountPaid.Text = Cells(currentrow, 7)
        Next currentrow
        txtDate.SetFocus
        ComBox1.SetFocus
        txtSlipNo.SetFocus
        txtStudentNum.SetFocus
        txtName.SetFocus
        txtFees.SetFocus
        txtAmountPaid.SetFocus
        txtBalance.SetFocus
    End Sub

Also, I was thinking: can I develop something that will use only one userform to send data to different sheets in the workbook?


  • Wpf Resource: "Unknown Build Error, 'Path cannot be null..."

    - by Femaref
    The following is a snippet from a XAML file defining a DataGrid in a control, including a template selector:

    <DataGrid.Resources>
        <selector:CurrencyColorSelector x:Key="currencyColorSelector">
            <selector:CurrencyColorSelector.NegativeTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Balance, StringFormat=n}" Background="Red"/>
                </DataTemplate>
            </selector:CurrencyColorSelector.NegativeTemplate>
            <selector:CurrencyColorSelector.NormalTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Balance, StringFormat=n}"/>
                </DataTemplate>
            </selector:CurrencyColorSelector.NormalTemplate>
        </selector:CurrencyColorSelector>
    </DataGrid.Resources>

Now, an error is thrown: "Unknown build error, 'Path cannot be null. Parameter name: path Line 27 Position 79.'" (a compiler or XAML validation error). I have no idea where this Path comes from, and my example doesn't show anything of it either. If you double-click the error, it points to the end of the first line. Has anybody encountered such a problem and found a solution for it? The example was from here: http://www.wpftutorial.net/DataGrid.html (Row Details depending on the type of data)


  • How do you keep your business rules DRY?

    - by Mario
    I periodically ponder how best to design an application whose every business rule exists in just a single location. (While I know there is no proverbial "best way" and that designs are situational, people must have a leaning toward one practice or another.) I work for a shop where they prefer to house as many of the business rules as possible in the database. This requires developers in many cases to perform identical front-end validations to avoid sending data to the database that will result in an exception—not very DRY. It grates me anytime I find myself duplicating any kind of logic—even lowly validation logic. I am a single-point-of-truth purist to an anal degree. On the other end of the spectrum, I know of shops that create dumb databases (the Rails community leans in this direction) and handle all of the business logic in a separate tier (in Rails the models would house "most" of this). Note the word "most", which implies that some business logic does end up spilling into other places (in Rails it might spill over into the controllers). In a way, a clean separation of concerns where all business logic exists in a single core location is a Utopian fantasy that's hard to uphold (n-tiered architecture or not). Furthermore, I see the "database as a fortress" view and would agree that the database should be built on constraints that cause it to reject bad data. As such, I hold principles that cause a degree of angst as I attempt to balance them. How do you balance the database-as-a-fortress view with the desire to have a single point of truth?


  • Anyone got a nifty credit expiry algorithm?

    - by garethkeenan
    Our website uses a credit system to allow users to purchase inexpensive digital goods (e.g. photos). We use credits, rather than asking the user to pay for items individually, because the items are cheap and we are trying to keep our credit-card/PayPal overhead low. Because we aren't a bank, we have to expire credits after a certain amount of time. We expire deposit credits after a year, but other types of credits (bonuses, prizes, refunds) may have a different shelf life. When a buyer buys an item, we spend the credit that is going to expire first. Our current system keeps track of every deposit by storing the original value and the remainder to be spent. We keep a list of all purchases as well, of course. I am currently moving to a system which is much more like a traditional double-entry accounting system. A deposit will create a ledger item, increasing the user's 'spending' account balance. Every purchase will also create a ledger item, decreasing the user's 'spending' account balance. The new system has running balances, while the old system does not, which greatly improves our ability to find problems and do reconciliations. We do not want to use the old system of keeping a 'remainder' value attached to each deposit record, because it is inefficient to replay a user's activities to calculate what the remainder of each deposit is over time (for the user's statement). So, after all of this verbose introduction, my question is: "Does anyone else out there have a similar system of expiring credits?" If you could describe how you calculate expired credits, it would be a great help. If all credits had the exact same shelf life, we would be able to calculate the expired amount using:

    Total Deposits - Total Spending - Deposits Not Due To Expire = Amount to Expire

However, because deposits can have different shelf lives, this formula does not work, because more than one deposit can be partially spent at any given time.
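One generic way to make the calculation concrete (a hedged sketch, not the poster's schema): treat deposits as FIFO lots ordered by expiry date, allocate spending against them in that order, and expire whatever is left in lots whose date has passed. In C, roughly:

    typedef struct {
        double remaining;   /* unspent portion of the deposit */
        long   expires_at;  /* expiry time, e.g. a Unix timestamp */
    } Lot;

    /* Spend from the earliest-expiring lots first; lots must be
       sorted by expires_at ascending. Returns any unfunded amount. */
    double spend(Lot *lots, int n, double amount) {
        for (int i = 0; i < n && amount > 0.0; i++) {
            double take = lots[i].remaining < amount ? lots[i].remaining : amount;
            lots[i].remaining -= take;
            amount -= take;
        }
        return amount;
    }

    /* Drain lots whose expiry has passed; the total drained is the
       amount to post as an 'expired' ledger entry. */
    double expire(Lot *lots, int n, long now) {
        double expired = 0.0;
        for (int i = 0; i < n; i++) {
            if (lots[i].expires_at <= now && lots[i].remaining > 0.0) {
                expired += lots[i].remaining;
                lots[i].remaining = 0.0;
            }
        }
        return expired;
    }

With a ledger-only store, the same lot state can be reconstructed at expiry time by replaying deposits and spends in order, so running balances and the expiry calculation need not conflict.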


  • What's it like being a financial programmer?

    - by Mike
    As a student who's done an internship at a Silicon Valley company (non-financial), I'm curious to know what it's like working for a financial company doing software development. I'd expect the hours to be longer, and the pay to be higher. Specifically, I have the following questions: What's the work/life balance really like? Are you expected to work 80 hours a week most weeks? For those who have worked in non-financial software engineering jobs, how does being a financial software engineer compare in terms of work/life balance? How much does it pay? I'm curious as to starting (i.e. just got a BS) pay, as well as "top out" pay. (I'd prefer concrete numbers - ballpark is fine). Also, bonuses would be useful information. What jobs do financial programmers typically have? Are most just general software engineers, or do people typically have very specialized (i.e. AI or systems) backgrounds? Also, do most programmers have PhDs? Are programmers typically required to be at work, or are financial companies generally flexible about letting programmers work from home? When at work, do programmers have to dress formally? What are the technology environments like? Are finance companies using state-of-the-art hardware and software, or are they generally more conservative in upgrading their equipment? What programming languages are typically used? If VBA (shudder) is used, is it a large part of a finance company's workflow? If you could turn back the clock, would you still be a financial programmer? I'm going to keep this post open a little bit longer to get some more responses.


  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel, I can see in the Windows Task Manager that all four cores of my CPU are actually more or less active. One core is more active than the other three, but there is also activity on those. There's no other program running (besides the OS kernel, of course) that could plausibly account for that activity. And when I close my program, all activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (like invoking system routines) from my program. Is it possible that the OS or the cores themselves try to balance some code or execution across all four cores, even if it's not a multithreaded program? Do you have any links that document this technique? Some info on the program: it's a console app written in Qt, and the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI. Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png This question is language agnostic and not tied to Qt/C++; I just want to know if Windows or Intel balance single-threaded code across all cores. If they do, how does this technique work? All I can think of is that kernel routines like reading from disk etc. are scheduled on all cores, but this won't improve performance significantly since the code still has to run synchronously with the kernel API calls. EDIT: Do you know any tools to do a better analysis of single- and/or multi-threaded programs than the poor Windows Task Manager?
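One way to test the migration hypothesis (an illustrative sketch, not from the question): pin the process to a single core with the Win32 affinity API and watch whether the load collapses onto that core. The usual explanation is that the scheduler is free to run a ready thread on any idle core after a context switch, so a single thread's load appears smeared across cores in Task Manager's averaged view.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Mask 0x1 = logical CPU 0 only. If the scattered load was just
           the scheduler migrating one thread between cores, it should
           now concentrate on a single core. */
        if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1)) {
            fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                    (unsigned long)GetLastError());
            return 1;
        }
        for (volatile long i = 0; i < 2000000000L; i++)
            ;  /* stand-in for the single-threaded workload */
        return 0;
    }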


  • Nokogiri HttParty Xpath Ruby on Rails

    - by Brian
    I am working with an MMORPG (Eve Online) request that returns XML. I am using HTTParty for the request, and I am trying to use Nokogiri to obtain attribute values for a specific element. Here's an example of the response:

    <eveapi version="2">
      <currentTime>2012-10-19 22:41:56</currentTime>
      <result>
        <rowset name="transactions" key="refID" columns="date,refID,refTypeID,ownerName1,ownerID1,ownerName2,ownerID2,argName1,argID1,amount,balance,reason,taxReceiverID,taxAmount">
          <row date="2012-10-18 23:41:50" refID="232323" refTypeID="9" ownerName1="University of Caille" ownerID1="32232" ownerName2="name" ownerID2="34343" argName1="" argID1="0" amount="5000.00" balance="5000.00" reason="Starter fund" taxReceiverID="" taxAmount=""/>
        </rowset>
      </result>
      <cachedUntil>2012-10-19 23:03:40</cachedUntil>
    </eveapi>

I only need to access attributes for the element "row", and there can be many rows returned. I have read about XPath, and from what I understand, if I do the following it should return all rows:

    doc.xpath('row')

however it does not return anything. Here's what I have so far:

    options = {:keyID => 111111, :vCode => 'fddfdfdfdf'}
    response = HTTParty.post('https://api.eveonline.com/char/WalletJournal.xml.aspx', :body => options)
    doc = Nokogiri::XML(response.body)
    doc.xpath('row').each do |r|
    end

The loop is never executed. What am I doing wrong? I need to return all row elements and gain access to each row's attributes. Thanks.


  • How to restrain one's self from the overwhelming urge to rewrite everything?

    - by Scott Saad
    Setup

    Have you ever had the experience of going into a piece of code to make a seemingly simple change and then realizing that you've just stepped into a wasteland that deserves some serious attention? This usually gets followed up with an official FREAK OUT moment, where the overwhelming feeling of rewriting everything in sight starts to creep up. It's important to note that this bad code does not necessarily come from others, as it may indeed be something we've written or contributed to in the past.

    Problem

    It's obvious that there is some serious code rot, horrible architecture, etc. that needs to be dealt with. The real problem, as it relates to this question, is that it's not the right time to rewrite the code. There could be many reasons for this:

    - Currently in the middle of a release cycle, therefore any changes should be minimal.
    - It's 2:00 AM, and the brain is starting to shut down.
    - It could have seemingly adverse effects on the schedule.
    - The rabbit hole could go much deeper than our eyes are able to see at this time.
    - etc.

    Question

    So how should we balance the duty of continuously improving the code with being a responsible developer? How do we refrain from contributing to the broken window theory, while also being aware of our actions and the potential recklessness they may cause?

    Update

    Great answers! For the most part, there seem to be two schools of thought: don't resist the urge, as it's a good one to have; or don't give in to the temptation, as it will burn you to the ground. It would be interesting to know if more people feel any balance exists.


  • Working with a database

    - by Susan
    I have an existing program for a bank account; the user can create an account, then withdraw or deposit money. As a transaction is processed, labels are used to show the current information, such as Beginning Balance, Transaction Fee, Withdrawal Amount, and Ending Balance. Now, I need to be able to keep track of the transactions being processed in a SQL database. I know how to add a database to the project and then set it up to display all of the data from the table in a gridview. That assumes, however, that I manually entered data into the table; instead, the table should be blank when the program starts, and as I process transactions, the data should be written to the table. How do I bind my existing fields (labels) to a datatable and send the text to the table? The book that I have is all about displaying data that is already in the table, and the couple of online tutorials I have been through seem to cover the same subject. I haven't found anything on how to do what I am looking for. Can someone help me out here? I don't mind references to other websites that might have the answer. Thanks, Susan


  • MYOB Import "amount paid"

    - by php-b-grader
    I seem to have found an anomaly with MYOB (I've actually found many anomalies; this is just another one that is doing my head in...). I am generating a file with all invoices from the web system - no problems. If an invoice has multiple lines and the account is paid COD, I am having a problem, e.g.

    "INV", "DATE" ... "AMOUNT", "INC TAX AMOUNT" ... "AMOUNT PAID"
    8421, 12/06/2010 ... 60, 66 ... 66
    8421, 12/06/2010 ... 120, 132 ... 132
    8421, 12/06/2010 ... 96, 105.6 ... 105.6
    8421, 12/06/2010 ... 84, 92.4 ... 92.4

When I import this file, the balance of the invoice is still outstanding. What appears to be the issue is that it is only importing the first line's "amount paid", so in other words, based on the above:

    Invoice 8421 is imported with 4 lines
    The total invoice amount is $396
    The amount paid (that is imported) is $66
    The outstanding balance = $330

Surely the first line isn't expected to be:

    Inc Tax Amount = $66
    Amount Paid = $396

It seems completely illogical to me... am I doing something wrong, or is MYOB just really bad?

