Search Results

Search found 16132 results on 646 pages for 'john mark high'.


  • Group Policy is not being applied from Server 2003 to win7 client

    - by John Hoge
    Hi, I'm experimenting with Group Policy settings. My DC is running Server 2003, and the client I am using for this test is running Win7. I've restarted the client a few times, and tried running gpupdate /force for good measure. This machine is in its own OU with a group policy applied to change one setting, Computer Configuration/Administrative Templates/Network/Offline Files. When I run MMC and look at Local Computer Policy on the client, this setting shows up as "not configured". Thanks, John

    Read the article

  • Some web pages (especially Apple documentation) cause heavy CPU usage in Windows IE8

    - by Mark Lutton
    Maybe this belongs in Server Fault instead, but some of you may have noticed this issue (particularly those developing on Mac, using a Windows machine to read the reference material). I posted the same question on a Microsoft forum and got one answer from someone who reproduced the problem, so it's not just my machine. No solution yet. Ever since this month's security updates, I find that many web pages cause the CPU to run at maximum for as long as the web page is visible. This happens in both IE7 and IE8 on at least three different computers (two with Windows XP, one with Vista). Here is one of the pages, running on XP with IE 8: http://learning2code.blogspot.com/2007/11/update-using-subversion-with-xcode-3.html Here is one that does it in Vista with IE8: http://developer.apple.com/iphone/library/documentation/Cocoa/Reference/Foundation/Classes/NSString_Class/Reference/NSString.html You can leave the page open for hours and the CPU is still at high usage. This doesn't happen every time. It is not always reproducible. Sometimes it is OK the second or third time it loads. In IE7 the high usage is in ieframe.dll, version 7.0.6000.16890. In IE8 the high usage is in iertutil.dll, version 8.0.6001.18806.

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes some sense to talk about the new parallel features in more detail. For basic understanding make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here. But now I want to discuss concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.
    The goal of Auto DOP
    The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (the ideal DOP) that still scales. In other words, if we were to increase the DOP above a certain point we would see a tailing off of the performance curve and the resource cost / performance would become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement.
    The goal of Queuing
    On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we (a) don't throttle down a DOP because other statements are running on the system, and (b) stay within the physical limits of a system's processing power. Instead of making statements go at a lower DOP we queue them to make sure they will get all the resources they want to run efficiently without thrashing the system. The theory – and hopefully practice – is that by giving a statement the optimal DOP, the sum of all statements runs faster with queuing than without queuing.
    Increasing the Number of Potential Parallel Statements
    To determine how many statements we will consider running in parallel, a single parameter should be looked at: PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here, but do realize that anything serial (i.e. anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system where you have two groups of queries, serial short running and potentially parallel long running ones, you may want to worry only about the long running ones with this parallel statement threshold. As an example, let's assume the short running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold to somewhere north of 30 seconds. That way the short running queries all run serial as they do today (if it ain't broken, don't fix it) while the long running ones are evaluated for (higher degrees of) parallelism. This makes sense because the longer running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running these in parallel are much more significant (again, that is mostly the case).
    Setting a Maximum DOP for a Statement
    Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree any given statement will be evaluated at. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT.
    This parameter controls the degree on the entire cluster and by default it is CPU (meaning it equals Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5 minute queries from the previous paragraph, the limit of 32 means that none of the statements that are evaluated for Auto DOP ever runs at more than a DOP of 32.
    Concurrently Running a High DOP
    A basic assumption about running high DOP statements at high concurrency is that at some point in time (and this is true on any parallel processing platform!) you will run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post… The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at that highest-efficiency DOP. The PARALLEL_SERVER_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVER_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP; its default equates to 4 * Default DOP (2 processes per DOP, and a default value of 2 * Default DOP, hence a default of 4 * Default DOP). Let's say we have PARALLEL_SERVER_TARGET set to 128. With our limit set to 32 (the default) we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any next statement will be queued. To run a system at high concurrency, PARALLEL_SERVER_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVER_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters, and set them based on your requirements.
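    For what it's worth, here is a minimal Python sketch of the simplified concurrency arithmetic described above (my own illustration with a made-up function name, not Oracle's internal algorithm):

        def max_unqueued_statements(parallel_server_target, parallel_degree_limit):
            # Statements beyond this count are queued rather than run at a lower DOP.
            return parallel_server_target // parallel_degree_limit

        print(max_unqueued_statements(128, 32))  # 4, matching the example in the post

    In other words, raising PARALLEL_SERVER_TARGET (or lowering PARALLEL_DEGREE_LIMIT) lets more statements run at their full DOP before queuing kicks in.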

    Read the article

  • a question on common lisp

    - by kostas
    Hello people, I'm going crazy with a small problem here. I keep getting an error and I can't seem to figure out why. The code is supposed to change the range of a list, so if we give it a list with values (1 2 3 4) and we want to change the range to 11 through 14, the result would be (11 12 13 14). The problem is that the last function, called scale-list, gives back an error saying: Debugger entered--Lisp error: (wrong-type-argument number-or-marker-p nil). Does anybody have a clue why? I use Aquamacs as an editor. Thanks in advance.
        ;;finds minimum in a list
        (defun minimum(list) (car (sort list #'<)))
        ;;finds maximum in a list
        (defun maximum(list) (car (sort list #'>)))
        ;;calculates the range of a list
        (defun range(list) (- (maximum list) (minimum list)))
        ;;this codes scales a value from a list
        (defun scale-value(list low high n)
          (+ (/ (* (- (nth (- n 1) list) (minimum list)) (- high low)) (range list)) low))
        ;and this code is supposed to scale the whole list
        (defun scale-list(list low high n)
          (unless (= n 0)
            (cons (scale-value list low high n) (scale-list list low high (- n 1)))))
        (scale-list '(0.1 0.3 0.5 0.9) 20 30 4)
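    For reference, a minimal Python sketch of the rescaling the question describes (an illustration of the intended behavior only, not a fix for the Lisp code above; the function name is mine):

        def scale_list(values, low, high):
            # Linearly rescale values so the minimum maps to low and the maximum to high.
            lo, hi = min(values), max(values)
            return [low + (v - lo) * (high - low) / (hi - lo) for v in values]

        print(scale_list([1, 2, 3, 4], 11, 14))          # [11.0, 12.0, 13.0, 14.0]
        print(scale_list([0.1, 0.3, 0.5, 0.9], 20, 30))  # [20.0, 22.5, 25.0, 30.0]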

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next "hop" to the destination; it has to determine how to prioritize this packet. If it's a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the "owner" of the packet etc. It looks up the internal database of "rules" of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally
    · Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability etc.
    · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp)
    · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small "slice" of a session. The context for the "header" packet needs to be stored in the router, in order to make this work.
    · If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward the packet to the next hop.
    · Update various statistics about the packet.
    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there's no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I'd do – choose BDB, so I can focus on my business problem. What will you do?
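    As a rough illustration of the per-packet lookups described above, here is a minimal Python sketch (a toy model with made-up tables and field names, not Berkeley DB's API):

        import collections

        routes = {"10.0.0.0/8": "hop-A", "192.168.0.0/16": "hop-B"}  # next-hop table
        slas = {"voice": 0, "data": 1}                               # lower number = higher priority
        low_priority_queue = collections.deque()

        def handle_packet(packet):
            # Route lookup, SLA/priority lookup, then forward or hold back the packet.
            next_hop = routes.get(packet["dest_prefix"], "default-hop")
            priority = slas.get(packet["kind"], 1)
            if priority == 0:
                return ("forward", next_hop)          # high-priority traffic goes out immediately
            low_priority_queue.append((packet, next_hop))
            return ("queued", next_hop)               # low-priority traffic is held back

        print(handle_packet({"dest_prefix": "10.0.0.0/8", "kind": "voice"}))
        print(handle_packet({"dest_prefix": "192.168.0.0/16", "kind": "data"}))

    In a real router each of these lookups plus the statistics update would happen inside a single transaction, which is exactly the property an embedded database like Berkeley DB provides.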

    Read the article

  • Problem with a recursive function to find sqrt of a number

    - by Eternal Learner
    Below is a simple program which computes the square root of a number using bisection. When executing this with a call like sqrtr(4,1,4) it goes into endless recursion. I am unable to figure out why this is happening. Below is the function:
        double sqrtr(double N, double Low, double High)
        {
            double value = 0.00;
            double mid = (Low + High + 1)/2;
            if(Low == High)
            {
                value = High;
            }
            else if (N < mid * mid)
            {
                value = sqrtr(N, Low, mid-1);
            }
            else if(N >= mid * mid)
            {
                value = sqrtr(N, mid, High);
            }
            return value;
        }
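    For comparison, here is a minimal Python sketch of a bisection square root that does terminate (my own illustration, not a direct fix of the C code above; it narrows a real-valued interval to a tolerance instead of using the integer-style mid-1/mid split):

        def bisect_sqrt(n, low, high, eps=1e-9):
            # Halve the interval [low, high] until it is narrower than eps.
            while high - low > eps:
                mid = (low + high) / 2.0
                if mid * mid < n:
                    low = mid
                else:
                    high = mid
            return (low + high) / 2.0

        print(bisect_sqrt(4.0, 1.0, 4.0))  # approximately 2.0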

    Read the article

  • "...redeclared as different kind of symbol"?

    - by CodeNewb
        #include <stdio.h>
        #include <math.h>

        double integrateF(double low, double high)
        {
            double low = 0;
            double high = 20;
            double delta_x = 0;
            double x, ans;
            double s = 1/2*exp((-x*x)/2);
            for(x=low; x<=high; x++)
                delta_x = x+delta_x;
            ans = delta_x*s;
            return ans;
        }
    It says that low and high are "redeclared as different type of symbol" and I don't know what that means. Basically, all I'm doing here (READ: trying) is integrating from low (which I set to 0) to high (20) to find the Riemann sum. The for loop looks kinda trippy too... I'm so lost.
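    As an aside, here is a minimal Python sketch of the Riemann sum the poster seems to be after (my own illustration, assuming the integrand is exp(-x*x/2); it does not address the C error itself, which comes from redeclaring the parameters low and high inside the function body):

        import math

        def integrate_f(low=0.0, high=20.0, delta_x=0.01):
            # Left Riemann sum of exp(-x^2 / 2) over [low, high].
            total = 0.0
            x = low
            while x < high:
                total += math.exp(-x * x / 2.0) * delta_x
                x += delta_x
            return total

        print(integrate_f())  # roughly 1.2533, i.e. about sqrt(pi/2)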

    Read the article

  • Step by Step Install of MAAS and JUJU

    - by John S
    I am working on understanding the pieces that I am missing in being able to deploy Juju across the other MAAS nodes. I don't know if I have a step out of place, or am missing a few. The server owns the router which handles the DHCP and DNS. Any assistance is greatly appreciated. When I am at the end I will either get a 409 error or an arbitrary 'pick tools 1.16.0' error. It is worth mentioning that local and aws work fine. Hopefully with all of these steps spelled out it will help someone else along the way too.
    Steps Setting Up MAAS and JUJU - 12.04 LTS
    Clean install, SSH only from the package selection during install.
        sudo apt-get install software-properties-common
        sudo apt-get install python-software-properties
        sudo add-apt-repository ppa:maas-maintainers/stable
        sudo add-apt-repository ppa:juju/stable
        sudo apt-get update
        sudo apt-get dist-upgrade
        sudo reboot
        sudo apt-get install maas maas-dns maas-dhcp
        sudo ufw disable
        sudo reboot
    Edit /etc/dhcp/dhcpd.conf:
        authoritative;
        subnet 10.0.0.0 netmask 255.255.255.0 { next-server 10.0.0.2; filename "pxelinux.0"; }
        sudo maas createsuperuser
        sudo maas-import-pxe-files
    Log in to MAAS at http://10.x.x.x/MAAS
    Cluster controller configuration for eth0: manage dhcp and dns, IP 10.0.0.2, subnet 255.255.255.0, broadcast 10.0.0.0, router IP 10.0.0.1, ip low 10.0.0.5, ip high 10.0.0.180
    Commissioning default and distro is set at 12.04; default domain is local.
        sudo maas-cli login maas http://10.x.x.x/MAAS/api/1.0 api-key
        ssh-keygen -t rsa -b 2048
    (press enter, no password; cat id_rsa.pub and enter the key into MAAS under SSH)
        sudo maas-cli maas nodes accept-all
    (interestingly enough I only get back [] when executing this)
    PXE one machine, accept and commission, start and deploy.
        sudo apt-get install juju-core juju-local
    MAAS config:
        maas:
          type: maas
          maas-server: 'http://10.x.x.x:80/MAAS'
          maas-oauth: 'MAAS_API_KEY'
          admin-secret: 'nothing'
          default-series: 'precise'
        juju switch maas
        sudo juju bootstrap --show-log

    Read the article

  • There is No Scrum without Agile

    - by John K. Hines
    It's been interesting for me to dive a little deeper into Scrum after realizing how fragile its adoption can be. I've been particularly impressed with James Shore's essay "Kaizen and Kaikaku" and the Net Objectives post "There are Better Alternatives to Scrum" by Alan Shalloway. The bottom line: You can't execute Scrum well without being Agile. Personally, I'm the rare developer who has an interest in project management. I think the methodology to deliver software is interesting, and that there are many roles whose job exists to make software development easier. As a project lead I've seen Scrum deliver for disciplined, highly motivated teams with solid engineering practices. It definitely made my job an order of magnitude easier. As a developer I've experienced huge rewards from having a well-defined pipeline of tasks that were consistently delivered with high quality in short iterations. In both of these cases Scrum was an addition to a fundamentally solid process and a huge benefit to the team. The question I'm now facing is how Scrum fits into organizations without solid engineering practices. The trend that concerns me is one of Scrum being mandated as the single development process across teams where it may not apply. And we have to realize that Scrum itself isn't even a development process. This is what worries me the most - the assumption that Scrum on its own increases developer efficiency when it is essentially an exercise in project management. Jim's essay quotes Tobias Mayer writing, "Scrum is a framework for surfacing organizational dysfunction." I'm unsure whether a Vice President of Software Development wants to hear that, reality notwithstanding. Our Scrum adoption has surfaced a great deal of dysfunction, but I feel the original assumption was that we would experience increased efficiency. It's starting to feel like a blended approach - Agile/XP techniques for developers, Scrum for project managers - may be a better fit. Or at least, a better way of framing the conversation. The blended approach.

    Read the article

  • JEOS

    - by john.graves(at)oracle.com
    JEOS stands for Just Enough Operating System. It is a great environment for building virtual machines without all the clutter of a windowing system, games, office products, etc. It is from Ubuntu and you install it using the Ubuntu server install, but rather than picking a standard install, press F4 and choose "Install a minimal system." Note: the "Install a minimal virtual machine" option is specific to VMware and I plan to use VirtualBox. Be sure to include OpenSSH in the install so that it installs sshd. *** Also, if you plan to install XE, you'll need to modify the partitions to have a larger swap space (at least 1.5 G). *** Once the install is done, I find it useful to install a few other items.
    Update Ubuntu: apt-get update
    Install some other tools:
        apt-get install openjdk-6-jre
    Yes, Java will be included in any of the WebLogic installs, but I need this one if I want to do remote display (for config wizards, etc).
        apt-get install gcc
    Some apps require rebuilding kernel modules, so you'll need a basic compiler.
    Install guest additions: choose the VirtualBox Devices->Install Guest Additions… option. This sets up a /dev/cdrom or /dev/cdrom1. You'll need to manually mount this temporarily: sudo mount /dev/cdrom /mnt. Then run the Linux .bin file.
    Update nofile limits. Most Java apps fail with the standard Ubuntu settings: edit /etc/security/limits.conf and add these lines at the end:
        *     soft nofile 65535
        *     hard nofile 65535
        root  soft nofile 65535
        root  hard nofile 65535
    These numbers are very high and I wouldn't do this on a production system, but for this environment it is fine.
    To get rid of the annoying piix error on boot, add the following line to the /etc/modprobe.d/blacklist.conf file: blacklist i2c_piix4

    Read the article

  • How to set incremental CSS classes in each Table Cell with jQuery?

    - by Mark Rapp
    I have a table populated via a DB and it renders like so (it could have any number of columns referring to "time", 5 columns, 8 columns, 2 columns, etc): <table id="eventInfo"> <tr> <td class="name">John</td> <td class="date">Dec 20</td> <td class="**time**">2pm</td> <td class="**time**">3pm</td> <td class="**time**">4pm</td> <td class="event">Birthday</td> </tr> <tr> <td class="name">Billy</td> <td class="date">Dec 19</td> <td class="**time**">6pm</td> <td class="**time**">7pm</td> <td class="**time**">8pm</td> <td class="event">Birthday</td> </tr> With jQuery, I'd like to go through each Table Row and incrementally set an additional class-name on only the Table Cells where "class='time'" so that the result would be: John Dec 20 2pm 3pm 4pm Birthday Billy Dec 19 6pm 7pm 8pm Birthday I've only been able to get it to count all of the Table Cells where "class='time'" and not each set within its own Table Row. This is what I've tried with jQuery: $(document).ready(function() { $("table#eventInfo tr").each(function() { var tcount = 0; $("td.time").attr("class", function() { return "timenum-" + tcount++; }) //writes out the results in each TD .each(function() { $("span", this).html("(class = '<b>" + this.className + "</b>')"); }); }); }); Unfortunately, this only results in: <table id="eventInfo"> <tr> <td class="name">John</td> <td class="date">Dec 20</td> <td class="**time** **timenum-1**">2pm</td> <td class="**time** **timenum-2**">3pm</td> <td class="**time** **timenum-3**">4pm</td> <td class="event">Birthday</td> </tr> <tr> <td class="name">Billy</td> <td class="date">Dec 19</td> <td class="**time** **timenum-4**">6pm</td> <td class="**time** **timenum-5**">7pm</td> <td class="**time** **timenum-6**">8pm</td> <td class="event">Birthday</td> </tr> Thanks for your help!

    Read the article

  • Merging .net object graph

    - by Tiju John
    Hi guys, has anyone come across any scenario wherein you needed to merge one object with another object of same type, merging the complete object graph. for e.g. If i have a person object and one person object is having first name and other the last name, some way to merge both the objects into a single object. public class Person { public Int32 Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } } public class MyClass { //both instances refer to the same person, probably coming from different sources Person obj1 = new Person(); obj1.Id=1; obj1.FirstName = "Tiju"; Person obj2 = new Person(); ojb2.Id=1; obj2.LastName = "John"; //some way of merging both the object obj1.MergeObject(obj2); //?? //obj1.Id // = 1 //obj1.FirstName // = "Tiju" //obj1.LastName // = "John" } I had come across such type of requirement and I wrote an extension method to do the same. public static class ExtensionMethods { private const string Key = "Id"; public static IList MergeList(this IList source, IList target) { Dictionary itemData = new Dictionary(); //fill the dictionary for existing list string temp = null; foreach (object item in source) { temp = GetKeyOfRecord(item); if (!String.IsNullOrEmpty(temp)) itemData[temp] = item; } //if the same id exists, merge the object, otherwise add to the existing list. foreach (object item in target) { temp = GetKeyOfRecord(item); if (!String.IsNullOrEmpty(temp) && itemData.ContainsKey(temp)) itemData[temp].MergeObject(item); else source.Add(item); } return source; } private static string GetKeyOfRecord(object o) { string keyValue = null; Type pointType = o.GetType(); if (pointType != null) foreach (PropertyInfo propertyItem in pointType.GetProperties()) { if (propertyItem.Name == Key) { keyValue = (string)propertyItem.GetValue(o, null); } } return keyValue; } public static object MergeObject(this object source, object target) { if (source != null && target != null) { Type typeSource = source.GetType(); Type typeTarget = target.GetType(); //if both types are same, try to merge if (typeSource != null && typeTarget != null && typeSource.FullName == typeTarget.FullName) if (typeSource.IsClass && !typeSource.Namespace.Equals("System", StringComparison.InvariantCulture)) { PropertyInfo[] propertyList = typeSource.GetProperties(); for (int index = 0; index < propertyList.Length; index++) { Type tempPropertySourceValueType = null; object tempPropertySourceValue = null; Type tempPropertyTargetValueType = null; object tempPropertyTargetValue = null; //get rid of indexers if (propertyList[index].GetIndexParameters().Length == 0) { tempPropertySourceValue = propertyList[index].GetValue(source, null); tempPropertyTargetValue = propertyList[index].GetValue(target, null); } if (tempPropertySourceValue != null) tempPropertySourceValueType = tempPropertySourceValue.GetType(); if (tempPropertyTargetValue != null) tempPropertyTargetValueType = tempPropertyTargetValue.GetType(); //if the property is a list IList ilistSource = tempPropertySourceValue as IList; IList ilistTarget = tempPropertyTargetValue as IList; if (ilistSource != null || ilistTarget != null) { if (ilistSource != null) ilistSource.MergeList(ilistTarget); else propertyList[index].SetValue(source, ilistTarget, null); } //if the property is a Dto else if (tempPropertySourceValue != null || tempPropertyTargetValue != null) { if (tempPropertySourceValue != null) tempPropertySourceValue.MergeObject(tempPropertyTargetValue); else propertyList[index].SetValue(source, tempPropertyTargetValue, 
    null); } } } } return source; } } However, this works when the source property is null; if the target has it, it will copy that to the source. It can still be improved to merge when inconsistencies are there, e.g. if FirstName="Tiju" and FirstName="John". Any comments appreciated. Thanks, TJ
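    For reference, a minimal Python sketch of the same merging idea (my own illustration using dicts rather than the C# reflection approach above; it keeps existing values and fills gaps from the other object):

        def merge(source, target):
            # Recursively fill missing (None) values in source with values from target.
            if source is None:
                return target
            if isinstance(source, dict) and isinstance(target, dict):
                for key, value in target.items():
                    source[key] = merge(source.get(key), value)
                return source
            return source  # keep the existing value when both sides are set

        a = {"Id": 1, "FirstName": "Tiju", "LastName": None}
        b = {"Id": 1, "LastName": "John"}
        print(merge(a, b))  # {'Id': 1, 'FirstName': 'Tiju', 'LastName': 'John'}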

    Read the article

  • Application Demos in UPK

    - by [email protected]
    Over the years, User Productivity Kit has expanded to include solutions to many project challenges. As of UPK 3.6.1, solutions are provided for pre and post application go-live learning, application testing, system documentation, presentation output, and more. New in UPK 3.6.1 are additional features that can be used effectively for application demo purposes. This can come in handy when you need to do a demo but don't want to show or can't show the live application. Maybe you're doing a presentation for a group of project stakeholders and want to focus on the business workflow implemented by the application rather than the mechanics of using it. Or possibly, you need to show the application but you're disconnected from any network preventing you from running the live application. In any of these cases, a presentation aid that represents the live application is what's needed. Previous versions of the UPK topic player would allow you to do this but would always show those UPK user interface elements that help a user learn the application. When you're presenting the narrative live, the UPK bubbles can be a distraction. UPK 3.6.1 provides some new features that allow you to control whether the bubbles display. There are two ways to hide bubbles in a topic. The first is a topic property that allows you to control bubbles across the entire topic. There are 3 settings for the Show Bubbles topic property. The default setting is Use frame settings which allows you to control whether bubbles display on a frame by frame basis. When you choose Always, the bubbles will always display regardless of the frame setting. The final choice is Never. Choosing Never will hide every bubble in your topic with one setting change. As with Always, choosing Never will ignore the frame setting. The second way to control the bubbles is at the frame level. First ensure that the topic's Show Bubbles property is set to Use frame settings. Navigate to the frame on which you want to turn off the bubble and click the Display bubble for this frame button to turn off the bubble. When you play the topic, the bubble will no longer be displayed. Depending on your needs, you might also use another longstanding UPK feature that allows you to control whether the action area displays on a frame. Just click the Action area on/off button to toggle its display. I've found the frame properties to be useful beyond creating presentation aids. When creating "See It!" only topics for more advanced users, I may hide the bubbles on some of the more straightforward frames. For example, if I have a form where one needs to fill out an address, I may display the first bubble in the sequence and explain what the subsequent steps are doing. I then hide bubbles on the remaining frames which are the more mechanical steps of entering the address. We'd like to hear your thoughts on this new UPK feature. Use the comments below to tell us how you've used it. John Zaums Senior Director, Product Development Oracle User Productivity Kit

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
    With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth. Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers. Simplicity To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff. Productivity The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates. Lower TCO Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges. 
First, we added another integration framework to enable cost-effective linking of the Staffing firm’s PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack released in March 2011 delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups, as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures. For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or Mark[email protected].

    Read the article

  • Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise

    - by [email protected]
    By ken.pulverman on March 24, 2010 8:51 AM ERP has been the backbone of enterprise software. The data held in your ERP system is core of most companies. Efficiencies gained through the accounting and resource allocation through ERP software have literally saved companies trillions of dollars. Not only does everything seem to be fine with your ERP system, you haven't had to touch it in years. Why aren't you ready for what comes next? Well judging by the growth rates in the space (Oracle posted only a 3% growth rate, while SAP showed a 12% decline) there hasn't been much modernization going on, just a little replacement activity. If you are like most companies, your ERP system is connected to a proprietary middleware solution that only effectively talks with a handful of other systems you might have acquired from the same vendor. Connecting your legacy system through proprietary middleware is expensive and brittle and if you are like most companies, you were only willing to pay an SI so much before you said "enough." So your ERP is working. It's humming along. You might not be able to get Order to Promise information when you take orders in your call center, but there are work arounds that work just fine. So what's the problem? The problem is that you built your business around your ERP core, and now there is such pressure to innovate your business processes to keep up that you need a whole new slew of modern apps and you need ERP data to be accessible from everywhere. Every time you change a sales territory or a comp plan or change a benefits provider your ERP system, literally the economic brain of your business, needs to know what's going on. And this giant need to access and provide information to your ERP is only growing. What makes matters even more challenging is that apps today come in every flavor under the Sun™. SaaS, cloud, managed, hybrid, outsourced, composite....and they all have different integration protocols. The only easy way to get ahead of all this is to modernize the way you connect and run your applications. Unlike the middleware solutions of yesteryear, modern middleware is effectively the operating system of the enterprise. In the same way that you rely on Apple, Microsoft, and Google to find a video driver for your 23" monitor or to ensure that Word or Keynote runs, modern middleware takes care of intra-application connectivity and process execution. It effectively allows you to take ERP out of the middle while ensuring connectivity to your vital data for anything you want to do. The diagram below reflects that change. In this model, the hegemony of ERP is over. It too has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information. And through modern middleware it will connect to everything. So yes ERP as we've know it is dead, but long live ERP as a connected application member of the modern enterprise. I want to Thank Andrew Zoldan, Group Vice President Oracle Manufacturing Industries Business Unit for introducing me to how some of his biggest customers have benefited by modernizing their applications infrastructure and making ERP a connected application. by John Burke, Group Vice President, Applications Business Unit

    Read the article

  • Essential Links for the SharePoint Client Side Developer

    - by Mark Rackley
    Front End Developer? Client Side Developer? Middle Tier??? I'm covering all my bases. Regardless, I'm sick and tired of Googling with Bing when I forget where information that I need often is located. I was getting ready to bookmark some of them when it hit me… "Hey Mark… (I don't actually refer to myself in the third person), Why don't you put the links in a blog so that it looks like you are being helpful!" I can't tell you how many times I've had to go back to some of my old blogs to remember how I did something. Seriously people, you need to start a blog, it's the best way to remember how the frick you got something to work… and it looks like you are being helpful when in reality you are just forgetful. So… where was I? Oh yeah.. essential information that I've needed from time to time when I was not using Visual Studio. All of this info has come in handy from time to time. Know about these things and keep them in your tool belt, it's amazing the stuff you can accomplish with just knowing where to look.
    What / Why:
    SPServices: Widely used library written by Marc Anderson used to call SharePoint Web Services with jQuery.
    jQuery: For SPServices and other cool stuff.
    Easy Tabs: Essential tool for quick page enhancements. This widely used tool from Christophe Humbert groups multiple web parts into one tabbed display. Very quick and easy way to get oohs and ahs from End Users.
    Convert Calculated Columns to HTML: Also from Christophe, I use this script all the time to convert html in my calculated columns to actually display as html and not with the tags.
    Unlocking the Mysteries of Data View Web Part XSL Tags: This blog series from Marc Anderson makes it very easy to understand what's going on with all those weird xsl tags in your data view web parts. Essential to make those things do what you want them to do.
    Creating Parent / Child list relationships (2007) and (2010): By far my most viewed blog posts (tens and tens of thousands). I have posts for both 2007 and 2010 that walk you through automatically setting the lookup id on a list to its "parent".
    Set SharePoint Form fields using Query String Variables: Also widely read, this one walks you through taking a variable from your Query String and setting a form field to that value.
    Hmmm… I KNOW there are more, but I'm tired and drawing a blank. I'll try to add them when I remember them (or need them again and think "Oh, I forgot to add that one"). But it's a start, and please feel free to add your own in the comments… So, it's YOUR turn to be helpful. What little tip or trick do you find yourself using ALL the time that you think everyone should know about??

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence); this is not true if you have fewer than 6 updates to the temporary table. This might lead to poor performance of queries which are sensitive to the content of temporary tables.
    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which was part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer case we implemented a workaround to avoid this issue (see below for example workarounds).
    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.
    We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.
    I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance.
    The ideal solution is to have a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue.
    Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics->Active Temp Tables.
    The script to understand the issue and the workaround is listed below:
        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • Pantech Link II, Ubuntu and Virtual XP

    - by user85041
    Okay this is my problem. I have a Pantech Link II, dmesg states: [ 896.072037] usb 2-3: new high-speed USB device number 3 using ehci_hcd [ 896.258562] cdc_acm 2-3:1.0: ttyACM0: USB ACM device [ 896.260039] usbcore: registered new interface driver cdc_acm [ 896.260042] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters Have it installed through wine (pc suite and driver) and it doesn't see it. Virtual XP through VMWare Player sees my device, knows it needs a driver. The removable devices says Curitel Pantech USB Device (Maybe Driver). I have PC Suite installed in XP, I install the driver through the executable.. it says problem with installing hardware, and then it disappears. Ubuntu sees it after restart, but if I start XP with that driver installed, it disappears from both and I get these errors in dmesg: [ 1047.760555] /dev/vmmon[2882]: PTSC: initialized at 3093322000 Hz using TSC, TSCs are synchronized. [ 1048.174033] /dev/vmmon[2882]: Monitor IPI vector: 0 [ 1055.293060] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1055.293074] /dev/vmnet: port on hub 8 successfully opened [ 1055.293088] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1055.293094] /dev/vmnet: port on hub 8 successfully opened [ 1072.446305] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1072.446316] /dev/vmnet: port on hub 8 successfully opened [ 1072.446328] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1072.446334] /dev/vmnet: port on hub 8 successfully opened [ 1072.856024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd [ 1079.292024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd [ 1079.732024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd [ 1127.743034] NET: Registered protocol family 39 [ 1127.749320] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_ALLOC (cid=1522210225,result=4). [ 1144.104031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd [ 1144.412031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd [ 1155.889976] ehci_hcd 0000:00:13.2: force halt; handshake ffffc90000642024 00004000 00000000 -> -110 [ 1155.889980] ehci_hcd 0000:00:13.2: HC died; cleaning up [ 1155.890008] usb 2-3: USB disconnect, device number 3 [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110 [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3). [ 1658.392018] NET: Unregistered protocol family 39 [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101) [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13 lessa@X:~$ dmesg|tail [ 1155.890008] usb 2-3: USB disconnect, device number 3 [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110 [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3). 
[ 1658.392018] NET: Unregistered protocol family 39 [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0) [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101) [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13 I have tried uninstalling, and installing manually from the device manager update driver while it's still has the warning sign.. it doesn't see the drivers as valid. No idea how to fix this.. would prefer to not have to go to another computer. I'm not trying to do anything but get the pictures off of it. I have to restart ubuntu, plug in device, for ubuntu to see it correctly again. I am like a month and a half old linux newbie so I have no idea the commands I could use for this, and I don't have a memory card in the phone to mount.

    Read the article

  • ipvsadm lists a few hosts by IP only, rest by name

    - by dmourati
    We use keepalived to manage our Linux Virtual Server (LVS) load balancer. The LVS VIPs are setup to use a FWMARK as configured in iptables. virtual_server fwmark 300000 { delay_loop 10 lb_algo wrr lb_kind NAT persistence_timeout 180 protocol TCP real_server 10.10.35.31 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.31" misc_timeout 30 } } real_server 10.10.35.32 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.32" misc_timeout 30 } } real_server 10.10.35.33 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.33" misc_timeout 30 } } real_server 10.10.35.34 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.34" misc_timeout 30 } } } http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.fwmark.html [root@lb1 ~]# iptables -L -n -v -t mangle Chain PREROUTING (policy ACCEPT 182G packets, 114T bytes) 190M 167G MARK tcp -- * * 0.0.0.0/0 w1.x1.y1.4 multiport dports 80,443 MARK set 0x493e0 62M 58G MARK tcp -- * * 0.0.0.0/0 w1.x1.y2.4 multiport dports 80,443 MARK set 0x493e0 [root@lb1 ~]# ipvsadm -L IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn FWM 300000 wrr persistent 180 -> 10.10.35.31:0 Masq 24 1 0 -> dis2.domain.com:0 Masq 24 3 231 -> 10.10.35.33:0 Masq 24 0 208 -> 10.10.35.34:0 Masq 24 0 0 At the time the realservers were setup, there was a misconfigured dns for some hosts in the 10.10.35.0/24 network. Thereafter, we fixed the DNS. However, the hosts continue to show up as only their IP numbers (10.10.35.31,10.10.35.33,10.10.35.34) above. [root@lb1 ~]# host 10.10.35.31 31.35.10.10.in-addr.arpa domain name pointer dis1.domain.com. OS is CentOS 6.3. Ipvsadm is ipvsadm-1.25-10.el6.x86_64. kernel is kernel-2.6.32-71.el6.x86_64. Keepalived is keepalived-1.2.7-1.el6.x86_64. How can we get ipvsadm -L to list all realservers by their proper hostnames?

    Read the article

  • How to count the most recent value based on multiple criteria?

    - by Andrew
    I keep a log of phone calls like the following, where the F column is LVM = Left Voice Mail, U = Unsuccessful, S = Successful.
        A1 1  B1 Smith   C1 John  D1 11/21/2012  E1 8:00 AM  F1 LVM
        A2 2  B2 Smith   C2 John  D2 11/22/2012  E2 8:15 AM  F2 U
        A3 3  B3 Harvey  C3 Luke  D3 11/22/2012  E3 8:30 AM  F3 S
        A4 4  B4 Smith   C4 John  D4 11/22/2012  E4 9:00 AM  F4 S
        A5 5  B5 Smith   C5 John  D5 11/23/2012  E5 8:00 AM  F5 LVM
    This is a small sample. I actually have over 700 entries. In my line of work, it is important to know how many unsuccessful (LVM or U) calls I have made since the last successful one (S). Since values in the F column can repeat, I need to take into consideration both the B and C columns. Also, since I can make a successful call with a client and then be trying to contact them again, I need to be able to count from the last successful call. My G column is completely open, which is where I would like to put a running total for each client (G5 would ideally = 1 while G4 = 0, G3 = 0, G2 = 2, G1 = 1, but I want these values calculated automatically so that I do not have to scroll through 700 names).
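    For what it's worth, here is a minimal Python sketch of the counting logic described above (my own illustration, not an Excel formula; it reproduces the G1=1, G2=2, G3=0, G4=0, G5=1 example):

        def unsuccessful_since_last_success(rows):
            # rows: (last_name, first_name, status) in chronological order.
            # Returns, per row, the count of LVM/U calls since that client's last S.
            running, out = {}, []
            for last, first, status in rows:
                key = (last, first)
                running[key] = 0 if status == "S" else running.get(key, 0) + 1
                out.append(running[key])
            return out

        log = [("Smith", "John", "LVM"), ("Smith", "John", "U"),
               ("Harvey", "Luke", "S"), ("Smith", "John", "S"),
               ("Smith", "John", "LVM")]
        print(unsuccessful_since_last_success(log))  # [1, 2, 0, 0, 1]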

    Read the article

  • OpenLDAP Authentication UID vs CN issues

    - by user145457
    I'm having trouble authenticating services using uid for authentication, which I thought was the standard method for authentication on the user. So basically, my users are added in ldap like this: # jsmith, Users, example.com dn: uid=jsmith,ou=Users,dc=example,dc=com uidNumber: 10003 loginShell: /bin/bash sn: Smith mail: [email protected] homeDirectory: /home/jsmith displayName: John Smith givenName: John uid: jsmith gecos: John Smith gidNumber: 10000 cn: John Smith title: System Administrator But when I try to authenticate using typical webapps or services like this: jsmith password I get: ldapsearch -x -h ldap.example.com -D "cn=jsmith,ou=Users,dc=example,dc=com" -W -b "dc=example,dc=com" Enter LDAP Password: ldap_bind: Invalid credentials (49) But if I use: ldapsearch -x -h ldap.example.com -D "uid=jsmith,ou=Users,dc=example,dc=com" -W -b "dc=example,dc=com" It works. HOWEVER...most webapps and authentication methods seem to use another method. So on a webapp I'm using, unless I specify the user as: uid=smith,ou=users,dc=example,dc=com Nothing works. In the webapp I just need users to put: jsmith in the user field. Keep in mind my ldap is using the "new" cn=config method of storing settings. So if someone has an obvious ldif I'm missing please provide. Let me know if you need further info. This is openldap on ubuntu 12.04. Thanks, Dave

    Read the article

  • How to Programmatically Split Data Using VBA Using Specific Logic

    - by Charlene
    This is an addition to my previous post here. The code that was previously supplied to me worked like a charm, but I am having issues modifying it to add some additional logic. I am creating a macro in VBA to do the following. I have raw order data that I need to transform based on some logic.
    Raw Data:
        order-id product-num date buyer-name prod-name qty-purc sales-tax freight order-st
        0000000000-00 10000000000000 5/29/2014 John Doe Product 0 1 1.00 1.50 GA
        0000000000-00 10000000000001 5/29/2014 John Doe Product 1 2 1.00 1.50 GA
        0000000000-00 10000000000002 5/29/2014 John Doe Product 2 1 1.00 2.00 GA
        0000000000-01 10000000000002 5/30/2014 Jane Doe Product 2 1 0.00 0.00 PA
        0000000000-01 10000000000003 5/30/2014 Jane Doe Product 3 1 0.00 0.00 PA
    Desired Outcome:
        HDR 0000000000-00 John Doe 5/29/2014
        CHG Tax 3.00
        CHG Freight 5.00
        ITM 10000000000000 Product 0 1
        ITM 10000000000001 Product 1 2
        ITM 10000000000002 Product 2 1
        HDR 0000000000-01 Jane Doe 5/30/2014
        ITM 10000000000002 Product 2 1
        ITM 10000000000003 Product 3 1
    The "CHG" rows are created based on the following logic: if the order-st is CA or GA, add the total of sales-tax and freight for each of the rows with the same order-id. If the order-st is NOT CA or GA, no CHG rows should be created. Any help would be appreciated - let me know if I left any details out!
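    For reference, a minimal Python sketch of the transformation logic described above (my own illustration rather than VBA; the dictionary keys simply reuse the column names from the sample):

        from itertools import groupby

        def transform(rows):
            # Rows for the same order-id are assumed to be adjacent, as in the sample.
            out = []
            for order_id, group in groupby(rows, key=lambda r: r["order-id"]):
                group = list(group)
                first = group[0]
                out.append(("HDR", order_id, first["buyer-name"], first["date"]))
                if first["order-st"] in ("CA", "GA"):
                    # CHG rows only for CA/GA orders, totalled across the order's rows.
                    out.append(("CHG", "Tax", sum(r["sales-tax"] for r in group)))
                    out.append(("CHG", "Freight", sum(r["freight"] for r in group)))
                for r in group:
                    out.append(("ITM", r["product-num"], r["prod-name"], r["qty-purc"]))
            return out

    Feeding it the five sample rows as dicts yields the HDR/CHG/ITM lines shown in the desired outcome.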

    Read the article

  • How to Programmatically Split and Manipulate Rows of Data From Excel

    - by Charlene
    I am hoping one of you will be able to help get me started on this issue. I need to create some sort of macro or VBA code to split and manipulate rows of data in Excel. For this example, we have 5 rows of data. The first 3 rows are item information for Order # 0000000000-00 and the last 2 rows are item information for order # 0000000000-01. I need one row ("HDR") for each order number, and one row ("ITM") for each product per order. I have included an example below showing the data I will receive and the desired outcome.
    Raw Data:
        order-id product-num date buyer-name product-name quantity-purchased
        0000000000-00 10000000000000 5/29/2014 John Doe Product 0 1
        0000000000-00 10000000000001 5/29/2014 John Doe Product 1 2
        0000000000-00 10000000000002 5/29/2014 John Doe Product 2 1
        0000000000-01 10000000000002 5/30/2014 Jane Doe Product 2 1
        0000000000-01 10000000000003 5/30/2014 Jane Doe Product 3 1
    Desired Outcome:
        HDR 0000000000-00 John Doe 5/29/2014
        ITM 10000000000000 Product 0 1
        ITM 10000000000001 Product 1 2
        ITM 10000000000002 Product 2 1
        HDR 0000000000-01 Jane Doe 5/30/2014
        ITM 10000000000002 Product 2 1
        ITM 10000000000003 Product 3 1
    Any and all help would be much appreciated!!! Thank you.

    Read the article

  • Excel 2007 pivot table does not aggregate properly

    - by Patrick
    I am using an Excel pivot table to summarize some data and just found a problem. The problem deals with how aggregate values are calculated. Let's say I have a table of data with three columns: Name, Date, Value. Suppose I create a pivot table where Name and then Date are used as Row Labels and Value is the aggregated value, i.e. Average. The pivot table will look something like this:
        +John .3450
          5/14/2010 1.234
          5/15/2010 3.450
          5/16/2010 -3.25
    What I think should be happening here is that the values for each date are averaged and then those values are averaged to come up with the value in the same row as the Name, John. But that is not what it does. It takes the average for each date, which it shows across from the date, but then instead of taking the average of those numbers, it actually uses the raw data and computes the average for all of John's values. It should show the average of the daily averages to correspond with the tree hierarchy, but instead it just shows me the average for all of John's values. It essentially will only aggregate at one level, but visually creates sub levels that it is not using. Does anyone know how to change this or understand by what logic this makes sense? Why would I create any sub groupings if I cannot compute aggregates on them?
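    To make the difference concrete, here is a minimal Python sketch comparing the two calculations (my own illustration with assumed sample numbers, not Excel's internals):

        from statistics import mean

        # Assumed sample (not from the post): John has two readings on one date.
        daily = {"5/14/2010": [1.0, 3.0], "5/15/2010": [5.0]}

        daily_averages = [mean(v) for v in daily.values()]   # [2.0, 5.0]
        print(mean(daily_averages))                          # 3.5 -> average of the daily averages
        print(mean([x for v in daily.values() for x in v]))  # 3.0 -> what the pivot shows for John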

    Read the article

  • Excel 2007 - Adding line breaks in a cell and no line over 50 characters

    - by Richard Drew
    I have notes stored in an Excel cell. I add line breaks and dates every time I add a new note. I need to copy this to another program, but it has a line limit of 50 characters. I want a line break for each new date and for when each date's comment goes over 50 characters. I'm able to do one or the other, but I can't figure out how to do both. I'd prefer words not to be split up, but at this point I don't care. Below is some sample input. If needed for a SUBSTITUTE or REPLACE function, I could add a ~ before each date in my input as a delimiter.
    Sample Input:
        07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong PO on query form. Responded with inv sent dates and locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to [email protected]
        05/14 - Copy sent to John Public and [email protected]
    Ideal Output:
        07/03 - FU on query. Copies and history included.
        CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong
        PO on query form. Responded with inv sent dates an
        d locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to [email protected]
        om
        05/14 - Copy sent to John Public and email@custome
        r.com
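    For illustration, here is a minimal Python sketch of the splitting rule described above (my own take, not Excel VBA; unlike the ideal output it wraps at word boundaries rather than cutting words at exactly 50 characters):

        import textwrap

        def split_notes(cell_text, width=50):
            # One or more lines per dated note, each wrapped to at most `width` characters.
            lines = []
            for note in cell_text.splitlines():
                lines.extend(textwrap.wrap(note, width=width) or [""])
            return "\n".join(lines)

        notes = ("07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public\n"
                 "06/27 - New ticket opened using query form")
        print(split_notes(notes))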

    Read the article
