Search Results

Search found 27181 results on 1088 pages for 'oracle desktop virtualization'.


  • IHRIM's Latest Workforce Solutions Review Focuses on Risk!

    - by Jay Richey, HCM Product Marketing
    IHRIM's latest edition of Workforce Solutions Review magazine (in print and online) has some really compelling features and articles focused on HCM risk and compliance management. Check out this line-up and sign up if you aren't already a member. It's well worth it. http://www.ihrimpublications.com/WSR_about.php
      • Three to Watch: HR's Growing Compliance Responsibilities for Data Security, Genetic Nondiscrimination, and Anti-Bribery Laws. By W. Scott Blackmer and Richard Santalesa, InfoLawGroup, LLP
      • Global HR and International Background Check Best Practices. By Terry Corley, Aletheia Consulting Group
      • Compliance: Old Wine in New Wineskins? By Ursula Christina Fellberg, Ph.D., UCF-StrategieBeraterin
    Join the HR/HR technology professionals who have subscribed to IHRIM's publications for so many years and become a reader today by visiting http://www.ihrimpublications.com/amember/signup.php.

    Read the article

  • How do I prevent libvirt from adding iptables rules for guest NAT networks?

    - by Jack Douglas
    Similar to this old request on BugZilla for Fedora 8, I'm hoping something has changed since then or someone knows another way. I want to manage the iptables rules by hand—the one-size-fits-all automatic rules don't suit me at all. These rules seem to be added and removed when a network is started and destroyed. Is there a way of either preventing these rules from being added at all, or hooking a script into the network start that restores the default rules afterwards? For now, I'm using a very crude method with cron, but I hope there is a better way: * * * * * root iptables-restore < /etc/sysconfig/iptables

    Read the article

  • Bad Mumble control channel performance in KVM guest

    - by aef
    I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 KVM guest, which runs on a Debian Wheezy Beta 4 KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor system through Virtio network interfaces. The hypervisor is attached to a 100Mbit/s uplink and does IP routing between the guest machines and the remaining Internet. In this setup we're experiencing a clearly recognizable lag between double-clicking a channel in the client and the channel-join action happening. This happens with a lot of different clients between 1.2.3 and 1.2.4, on Linux and Windows systems. Voice quality and latency seem to be completely unaffected by this. Most of the time the client's information dialog states a 16ms latency for both the voice and control channels. The deviation for the control channel is mostly a lot higher than that of the voice channel. In some situations the control channel is displayed with a 100ms ping and a deviation of about 1000. It seems TCP performance is the problem here. We had no problems on an earlier setup which was in principle quite like the new one: a Debian Lenny based Xen hypervisor with a software-virtualised guest machine and an earlier version of the Mumble 1.2.3 series. The current murmurd --version says: 1.2.3-349-g315b5f5-2.1

    Read the article

  • An Interview with JavaOne Rock Star Martijn Verburg

    - by Janice J. Heiss
    An interview with JavaOne Rock Star Martijn Verburg, by yours truly, titled “Challenging the Diabolical Developer: A Conversation with JavaOne Rock Star Martijn Verburg,” is now up on otn/java. Verburg, one of the leading movers and shakers in the Java community, is well known for his “Diabolical Developer” talks at JavaOne, where he uncovers some of the worst practices that Java developers are prone to. He mentions a few in the interview:
      • “A lack of communication: Software development is far more a social activity than a technical one; most projects fail because of communication issues and social dynamics, not because of a bad technical decision. Sadly, many developers never learn this lesson.
      • No source control: Some developers simply store code in local file systems and e-mail the code in order to integrate their changes; yes, this still happens.
      • Design-driven design: Some developers are inclined to cram every design pattern from the Gang of Four (GoF) book into their projects. Of course, by that stage, they've actually forgotten why they're building the software in the first place.”
    He points to a couple of core assumptions and confusions that lead to trouble: “One is that developers think that the JVM is a magic box that will clean up their memory and make their code run fast, as well as make them cups of coffee. The JVM does help in a lot of cases, but bad code can and will still lead to terrible results! The other trend is to try to force Java (the language) to do something it's not very good at, such as rapid Web development. So you get a proliferation of overly complex frameworks, libraries, and techniques trying to get around the fact that Java is a monolithic, statically typed, compiled, OO environment. It's not a Golden Hammer!” Verburg has many insightful things to say about how to keep a Java User Group (JUG) going, about the “Adopt a JSR” program, bugathons, and much more. Check out the article here.

    Read the article

  • Why Does Adding a UDF or Code Truncate the # of Resources in the List?

    - by Jeffrey McDaniel
    Go to the Primavera - Resource Assignment History subject area. Under Resources, General, add the fields Resource Id, Resource Name and Current Flag. Because this is a historical subject area with Type II slowly changing dimensions for Resources, you may get multiple rows for each resource if there have been any changes on the resource. You may see a few records with current flag = 0, and you will see a row with current flag = 1 for all resources. Current flag = 1 means this is the most up-to-date row for the resource. In this query the OBI server is only querying the W_RESOURCE_HD dimension. (Query from the nqquery log:)
      select distinct 0 as c1,
             D1.c1 as c2,
             D1.c2 as c3,
             D1.c3 as c4
      from (select distinct T10745.CURRENT_FLAG as c1,
                   T10745.RESOURCE_ID as c2,
                   T10745.RESOURCE_NAME as c3
            from W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */
            where (T10745.LAST_RUN_PER_DAY_FLAG = 1)
           ) D1
    If you add a resource code to the query, it now forces the OBI server to include data from W_RESOURCE_HD, W_CODES_RESOURCE_HD, as well as W_ASSIGNMENT_SPREAD_HF. Because the Resource and Resource Codes are in different dimensions, they must be joined through a common fact table. So any time you pull data from different dimensions, the query will ALWAYS pass through the fact table in that subject area. One rule is that if there is no fact value related to that dimensional data, then nothing will show. In this case, if you have a list of 100 resources when you query just Resource Id, Resource Name and Current Flag, but the list drops to 60 when you add a Resource Code, it could be because those resources exist at a dictionary level but are not assigned to any activities and therefore have no facts. As discussed in a previous blog, it's all about the facts.
    Here is the query returned from the OBI server when trying to query Resource Id, Resource Name, Current Flag and a Resource Code. You'll see in the query that an actual fact is included (AT_COMPLETION_UNITS), even though it is never returned when viewing the data through the Analysis.
      select distinct 0 as c1,
             D1.c2 as c2,
             D1.c3 as c3,
             D1.c4 as c4,
             D1.c5 as c5,
             D1.c1 as c6
      from (select sum(T10754.AT_COMPLETION_UNITS) as c1,
                   T10706.CODE_VALUE_02 as c2,
                   T10745.CURRENT_FLAG as c3,
                   T10745.RESOURCE_ID as c4,
                   T10745.RESOURCE_NAME as c5
            from W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */ ,
                 W_CODES_RESOURCE_HD T10706 /* Dim_W_CODES_RESOURCE_HD_Resource_Codes_HD */ ,
                 W_ASSIGNMENT_SPREAD_HF T10754 /* Fact_W_ASSIGNMENT_SPREAD_HF_Assignment_Spread */
            where (T10706.RESOURCE_OBJECT_ID = T10754.RESOURCE_OBJECT_ID
                   and T10706.LAST_RUN_PER_DAY_FLAG = 1
                   and T10745.ROW_WID = T10754.RESOURCE_WID
                   and T10745.LAST_RUN_PER_DAY_FLAG = 1
                   and T10754.LAST_RUN_PER_DAY_FLAG = 1)
            group by T10706.CODE_VALUE_02, T10745.RESOURCE_ID, T10745.RESOURCE_NAME, T10745.CURRENT_FLAG
           ) D1
      order by c4, c5, c3, c2
    When you query any subject area and cross different dimensions, especially Type II slowly changing dimensions, if the result set appears to be short, the first place to look is whether the object has associated facts.

    Read the article

  • HTG Explains: Do You Need to Worry About Updating Your Desktop Programs?

    - by Chris Hoffman
    There was a time when we had to worry about manually updating desktop applications. Adobe Flash and Reader were full of security holes and didn’t update themselves, for example — but those days are largely behind us. The Windows desktop is the only big software platform that doesn’t automatically update applications, forcing every developer to code their own updater. This isn’t ideal, but developers have now largely stepped up to the plate.    

    Read the article

  • How do I make wallpaper fit both monitors in a dual monitor setup?

    - by Ben
    I am deploying some custom corporate wallpaper as part of a Windows 7 rollout. Some people will be using dual monitors, and the additional monitors may be either 4:3 or widescreen. I want to use the same wallpaper on both screens (i.e. 2 copies of the same wallpaper, not stretched across both.) If I set the background to "stretch", it uses the aspect ratio of the primary monitor to stretch the wallpaper on both monitors. So, for example, if I have a dual monitor setup using a 4:3 TFT as primary and my (widescreen) laptop LCD as secondary, the image shows on the laptop LCD in 4:3, with a black stripe down either side. I've only noticed this as an issue with my "custom" wallpaper. Both the default MS wallpaper and the built-in Lenovo wallpaper don't seem to have this issue. Is this done by using "trickery" such as using an image larger than the largest resolution you will have and centering it? (i.e. so you crop out part of the image.) Or can this be done "properly"? I don't want to use 3rd party software to do this, but would happily do a bit of PowerShell scripting if this would solve the issue. Thanks in advance, Ben

    Read the article

  • Using the position() function to access a particular node when using the While activity in SOA 11.1.1.5

    - by AJ
    Hi, if you are using the While activity in SOA Suite 11.1.1.5 and, within the loop, you need to access a repeating node of the XML, you can use the XPath expression below. Here is the XML that I am using for this example: <?xml version='1.0' encoding='UTF-8'?> David DemoJob 1 2012-04-15 40000 0 10 Steve TestJob 1 2012-04-15 40000 0 10 Here you can notice that the Emp node repeats, i.e. the EmpCollection node contains multiple employees. Now, in one of the loop's Assign activities, you need to access a particular node: the first node on the first iteration, the second node on the second, and so on. You need to make use of the position() function, like this: bpws:getVariableData('Receive1_Read_InputVariable','body','/ns4:EmpCollection/ns4:Emp[position()=$loopCounter]/ns4:job') Please note: loopCounter is a variable of type xsd:int that we created and initialized to 1 before the loop. The loop will run once for each Emp node present at runtime. For that, in the While activity you can use the XPath expression below: ora:countNodes('Receive1_Read_InputVariable','body','/ns4:EmpCollection/ns4:Emp') >= bpws:getVariableData('loopCounter') Do let me know in case of any issues or concerns. Cheers, AJ
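    Outside BPEL, the same position()-based selection can be illustrated with the standard javax.xml.xpath API. This is a self-contained sketch of my own; the payload below is a simplified, hypothetical stand-in for the EmpCollection document above (namespaces omitted):
      import java.io.StringReader;
      import javax.xml.parsers.DocumentBuilderFactory;
      import javax.xml.xpath.XPath;
      import javax.xml.xpath.XPathConstants;
      import javax.xml.xpath.XPathFactory;
      import org.w3c.dom.Document;
      import org.w3c.dom.NodeList;
      import org.xml.sax.InputSource;

      public class PositionDemo {
          public static void main(String[] args) throws Exception {
              // Hypothetical payload with the same repeating Emp shape
              String xml = "<EmpCollection>"
                         + "<Emp><job>DemoJob</job></Emp>"
                         + "<Emp><job>TestJob</job></Emp>"
                         + "</EmpCollection>";
              Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                      .parse(new InputSource(new StringReader(xml)));
              XPath xpath = XPathFactory.newInstance().newXPath();

              // Count the repeating nodes, as ora:countNodes() does in the While condition
              NodeList emps = (NodeList) xpath.evaluate("/EmpCollection/Emp", doc, XPathConstants.NODESET);

              // Select one node per iteration with position(), as in the Assign activity
              for (int i = 1; i <= emps.getLength(); i++) {
                  String job = xpath.evaluate("/EmpCollection/Emp[position()=" + i + "]/job", doc);
                  System.out.println("Iteration " + i + ": job = " + job);
              }
          }
      }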

    Read the article

  • How to remove the Analyze option from the report in OBI 11.1.1.7.0?

    - by Varun
    Que) How to remove the Analyze option from the report in OBI 11.1.1.7.0?
    Ans) You can change the properties of a dashboard and its pages. Specifically, you can:
      • Change the style and description of the dashboard.
      • Add hidden named prompts to the dashboard and to its pages.
      • Specify which links (Analyze, Edit, Refresh, Print, Export, Add to Briefing Book, and Copy) are to be included with analyses at the dashboard level. Note that you can set these links at the dashboard page level and the analysis level, which override the links that you set at the dashboard level.
      • Rename, hide, reorder, set permissions for, and delete pages.
      • Specify which accounts can save shared customizations and which accounts can assign default customizations for pages, and set account permissions.
      • Specify whether the Add to Briefing Book option is to be included in the Page Options menu for pages.
    To change the properties of a dashboard and its pages:
      1. Edit the dashboard.
      2. Click the Tools toolbar button and select Dashboard Properties. The "Dashboard Properties dialog" is displayed.
      3. Make the property changes that you want and click OK.
      4. Click the Save toolbar button.

    Read the article

  • Using Coherence API to get POF bytes

    - by Bruno.Borges
    Someone raised the question of how to use the Coherence API to get the bytes of an object in POF (Portable Object Format) programmatically. So I came up with this small code sample that shows how simple the API is to use :-)
      SimplePofContext spc = new SimplePofContext();
      spc.registerUserType(0, User.class, new UserSerializer());
      // consider UserSerializer an implementation of PofSerializer

      User u = new User();
      u.setId(21);
      u.setName("Some Name");
      u.setEmail("[email protected]");

      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      DataOutput dataOutput = new DataOutputStream(baos);
      BufferOutput bufferOutput = new WrapperBufferOutput(dataOutput);
      spc.serialize(bufferOutput, u);

      byte[] byteArray = baos.toByteArray();
      System.out.println(Arrays.toString(byteArray));
    Easy, isn't it?
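    Going the other way is symmetrical. Here is a minimal sketch of my own (assuming the same SimplePofContext with the same user type registration, and Coherence's WrapperBufferInput counterpart to WrapperBufferOutput) that turns the POF bytes back into a User:
      // Continues the snippet above; ByteArrayInputStream/DataInputStream/DataInput are java.io,
      // BufferInput/WrapperBufferInput are com.tangosol.io
      ByteArrayInputStream bais = new ByteArrayInputStream(byteArray);
      DataInput dataInput = new DataInputStream(bais);
      BufferInput bufferInput = new WrapperBufferInput(dataInput);
      User copy = (User) spc.deserialize(bufferInput);
      System.out.println(copy.getName()); // "Some Name"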

    Read the article

  • need recommendation for running PHP/Zend based optimizer

    - by senorsmile
    Firstly, I must admit that I don't know much about setting up PHP beyond the basics. I have an Ubuntu 10.04 server system (hosted) running primarily as an FTP store for a commercial store software. The server that the commercial store is installed on is unfortunately not very reliable, and we would like to move it to this Ubuntu 10.04 server. (We've already received permission from the store vendor to do this.) My problem is that they use Zend Optimizer, which is only compatible with PHP 5.2. I have tried a couple of "hacks" to downgrade PHP to 5.2, but it breaks so many other things that it doesn't seem worth it. My idea is to install some sort of container of Ubuntu 8.04 (like OpenVZ) on the server to house a native install of PHP 5.2 to meet the dependency of the store software. However, it appears that OpenVZ is no longer supported on Ubuntu. Is there another similar solution that I could run on a hosted server to install a separate "container-like" 8.04 system?

    Read the article

  • Personalized Pricing

    - by David Dorf
    In past postings I've spent a fair amount of time talking about targeted promotions. Using a complete view of the customer that includes purchase history, location history, and psychographics gleaned from social media, we can select the offer with the greatest chance of redemption. This is done to influence shopping behavior, which might mean introducing the consumer to a new product line, increasing their basket size, increasing the frequency of purchases, etc. Safeway seems to be taking a slightly different approach with their personalized pricing. In addition to offering electronic coupons and club card offers, they are also providing a personalized price for certain items based on purchase history. So when Sally wants to shop at Safeway, she first checks the "Just for U" website for three types of deals. She starts by selecting manufacturer coupons to load into her loyalty card, then she checks the Club Card for offers like "buy one get one free." The third step is the interesting one. Safeway will set a particular lower price for Sally, good for 90 days, on items she buys often. Clearly this isn't enforcing a new behavior but rather instilling loyalty. I would love to know exactly how they are determining the personalized price. Of course bargain hunters can still stack the three offers so they can, for example, get their $4.99 Oatmeal for $0.72. I like this particular question and answer from their website's FAQ: "My offers are not that great. Can I tell you what offers I need?" "That's a good idea. That functionality is not currently available, but we appreciate your input and are constantly improving our just for U program. Stay tuned for exciting enhancements!" I suppose if Safeway is tracking all the purchases, they can easily determine whether the customer is profitable. As long as the customer stays profitable, why not let them determine a few offers themselves? Food for thought.

    Read the article

  • Invitation to the FraOSUG Meeting on April 17, 2012 (20th Meeting)

    - by uligraef
    The 20th FraOSUG meeting takes place on April 17, 2012.
    When? April 17, 2012, 18:00 to approx. 21:00
    Where? Commerzbank AG, DLZ5/PHH, Hafenstraße 51, Frankfurt
    Agenda:
      • Darwin Calendar Server on OpenIndiana
      • Illumos and OpenIndiana news
      • SoftWORM with ZFS (part 3)
      • Discussion
    More details and exact directions: http://www.fraosug.de
    Registration via: http://www.doodle.com/ugsbaxxrunkbun66

    Read the article

  • Customer Experience and BPM – From Efficiency to Engagement

    - by Ajay Khanna
    Over the last few years, the focus of BPM has mainly been to improve business efficiency: to create more efficient processes, to remove bottlenecks, to automate processes. That still holds true, and why not? Isn't BPM all about continuous improvement? BPM facilitates and requires business and IT collaboration. But business also requires working with customers. Do we not want to get close to and collaborate with our customers? This is where Social BPM takes BPM a step further. It not only allows people within an organization to collaborate to design exceptional processes, and not only lets them collaborate on resolving a case, but also lets them engage with the customers. Engaging with customers means, first of all, connecting with them on their terms and turf. Take a new account opening process. Can a customer call you and initiate the process? Can a customer email you, or go to the website and initiate the process? Can they tweet you and initiate the process? Can they check the status of the process via any channel they like? Can they take a picture of a damaged package delivery and kick off a returns process from their mobile device, with GIS data? Yes, these are various aspects to consider during process design if the goal is better customer experience and engagement. Of course, we want to be efficient and agile, but the focus here needs to be the customer. Now when the customer is tweeting about your products, posting on Facebook and Yelp about their experience with your company (and your process), you need to seek out that information. You need to gather and analyze the customer's feedback on social media and use that information to improve the processes and products. This is an excellent source of product and process ideation. So BPM is no longer only about improving back-office process efficiency; it is moving into a new and exciting phase of improving frontline customer-facing processes, customer experience and engagement. Let me know how you think BPM can enhance customer experience.

    Read the article

  • Iterative and Incremental Principle Series 2: Finding Focus

    - by llowitz
    Welcome back to the second blog in a five-part series where I recount my personal experience with applying the Iterative and Incremental principle to my daily life. As you recall from part one of the series, a conversation with my son prompted me to think about practical applications of the Iterative and Incremental approach, and I realized I had incorporated this principle in my exercise regime. I have been a runner since college, but about a year ago I sustained an injury that prevented me from exercising. When I was sufficiently healed, I decided to pick it up again. Knowing it was unrealistic to pick up where I left off, I set a goal of running 3 miles, or approximately 30 minutes. I was excited to get back into running and determined to meet my goal. Unfortunately, after what felt like a lifetime, I looked at my watch and realized that I had 27 agonizing minutes to go! My determination waned and my positive "I can do it" attitude was overridden by thoughts of "This is impossible". My initial focus and excitement were not sustained, so I never met my goal. Understanding that the 30-minute run was simply too much for me mentally, I changed my approach. I decided to try interval training. For each interval, I planned to walk for 3 minutes, then jog for 2 minutes, and finally sprint for 1 minute, and I planned to repeat this pattern 5 times. I found that each interval set was challenging, yet achievable, leaving me excited and invigorated for my next interval. I easily completed five intervals – or 30 minutes!! My sense of accomplishment soared. What does this have to do with OUM? Have you heard the saying -- "How do you eat an elephant? One bite at a time!"? This adage certainly applies in my example and in an OUM systems implementation. It is easier to manage, track progress and maintain team focus for weeks at a time, rather than for months at a time. With shorter milestones, the project team focuses on the iteration goal. Once the iteration goal is met, a sense of accomplishment is experienced and the team can be re-focused on a fresh, yet achievable new challenge. Join me tomorrow as I expand the concept of Iterative and Incremental by taking a step back to explore the recommended approach for planning your iterations.

    Read the article

  • Is there a way to exclude a specific drive vdi from "snapshots" in VirtualBox?

    - by Graza
    ...or is there another space-efficient way of dealing with the page/swap file of the guest OS? I've realised that it's quite possible/likely that one of the things which "bloats" the snapshot/diff vdis when a snapshot is taken is the guest operating system's pagefile. For example, say I have a 2Gb swap file in a Windows guest OS, and over the course of a few weeks the usage of the swap file has gone over 1Gb a couple of times. When I next create a snapshot, it seems likely that I'd be almost guaranteed around 1Gb of space taken up in the new differencing disk just because of changes in the swap file. Obviously (provided I never did "live" snapshots on running or paused machines, and only ever did them when the machine was shut down), I would not need any of the information in the swap file to be saved. So this would simply be a waste of 1Gb. I'm wondering if there's a way to attach a vdi to a VM and flag it as "exclude from snapshots" - which would mean I could put the swap file on a different vdi which would never be included in a snapshot. Or if anyone has any other suggestions. Or an explanation of why it might not be an issue. I could obviously delete and recreate a swap drive vdi every time I did a snapshot to achieve the same effect, but this is a little more effort than simply clicking "create snapshot"...

    Read the article

  • Let Me Show You Something: Instagram, Vine and Snapchat for Brands

    - by Mike Stiles
    While brands are well aware of how much more impactful images are than text-only posts on social channels, today you're additionally being presented with platform after additional platform for hosting, doctoring and sharing photos and videos. Can you play in every sandbox? And if you do, can you be brilliant on all of them? As has usually been the case, so far brands are sticking their toes into new platforms while not actually committing to them, or strategizing for them, or resourcing them. TrackMaven found that of the 123 F500 companies using Instagram, only 22% of them are active on it. Likewise, research from Simply Measured found brands are indeed jumping in, with the number establishing a presence on Instagram up 55% over the past year. Users want them there… brand engagement has exploded 350%, and over 1/3 of the top brands have at least 10,000 followers. BUT… the top 10 brands are generating 33% of all posts, reaping 83% of all engagement. Things are also growing on Twitter's Vine, the 6-second looping video app that hit 40 million users in August. The 7th Chamber says 5 tweets a second contain a Vine link. Other studies say branded Vines are 4 times more likely to be shared and seen than rank-and-file branded videos. Why? Users know that even if a video is pure junk, they won't get robbed of too much of their valuable time. Vine is always upgrading, so you can make sure your videos are worth viewers' time. You can now edit videos, and save and work on several projects concurrently. What you can't do is upload a finely crafted video into Vine, but you can do that with Instagram. The key to success? The same as with all other content: make it of value. Deliver a laugh or a lesson or both. How-tos, behind-the-scenes peeks, contests, and demos all make sense in the short video format. Or follow Nash Grier's example, which is to just have fun with and connect to your viewers, earning their trust that your next Vine will be as good as the last. Nash is only 15, has over 1.4 million followers, and adds about 100,000 a week. He broke out when one of his videos was re-Vined by some other kid with 300,000 followers. Make good stuff, get it in front of influencers, and your brand Vines could break out as well. Then there's Snapchat, the "this photo will self-destruct" platform. How can that be of use to brands besides offering coupons that really expire? The jury is out. But with an audience of over 100 million and a valuation of $800 million, media-with-a-time-limit is compelling. Now there are "Snapchat Stories" that can last 24 hours and be shared to the public at large. You might be able to capitalize on how much more focus gets put on content when there's a time limit on its availability. The underlying truth to all of this is: these are all tools. Very cool, feature-rich tools, but tools. You can give the exact same art kit to 5 different people and you'd get back 5 very different works, ranging from worthless garbage to masterpiece. Brands are being called upon to be still- and moving-image artists. That's what your customers are used to seeing, from a variety of sources. Commit to communicating with them accordingly. @mikestiles Photo: stock.xchng

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput, with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination and associated SLAs. For each packet, the router has to determine the address of the next "hop" to the destination; it has to determine how to prioritize this packet. If it's a high-priority packet, then it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.
    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the "owner" of the packet, etc. It looks up its internal database of "rules" for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low-priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:
      • Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability, etc.
      • Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
      • Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small "slice" of a session. The context for the "header" packet needs to be stored in the router in order to make this work.
      • If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward the packet to the next hop.
      • Update various statistics about the packet.
    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction, so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there's no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I'd do – choose BDB, so I can focus on my business problem. What will you do?
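    To make the "embeddable" part concrete, here is a minimal sketch using the Berkeley DB Java Edition API. The choice of the Java Edition, and all names, paths, and sample addresses below, are illustrative assumptions of mine, not code from this post:
      import java.io.File;
      import com.sleepycat.je.Database;
      import com.sleepycat.je.DatabaseConfig;
      import com.sleepycat.je.DatabaseEntry;
      import com.sleepycat.je.Environment;
      import com.sleepycat.je.EnvironmentConfig;
      import com.sleepycat.je.LockMode;
      import com.sleepycat.je.OperationStatus;

      public class NextHopStore {
          public static void main(String[] args) throws Exception {
              // Open (or create) an embedded environment and a key/value store
              File dir = new File("/tmp/routes"); // hypothetical location
              dir.mkdirs();                       // JE requires the directory to exist
              EnvironmentConfig envCfg = new EnvironmentConfig();
              envCfg.setAllowCreate(true);
              Environment env = new Environment(dir, envCfg);
              DatabaseConfig dbCfg = new DatabaseConfig();
              dbCfg.setAllowCreate(true);
              Database routes = env.openDatabase(null, "routes", dbCfg);

              // Store a destination -> next-hop mapping (made-up sample data)
              DatabaseEntry key = new DatabaseEntry("10.1.2.0/24".getBytes("UTF-8"));
              DatabaseEntry val = new DatabaseEntry("192.168.0.254".getBytes("UTF-8"));
              routes.put(null, key, val);

              // Look up the next hop for an incoming packet's destination
              DatabaseEntry found = new DatabaseEntry();
              if (routes.get(null, key, found, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                  System.out.println("next hop: " + new String(found.getData(), "UTF-8"));
              }

              routes.close();
              env.close();
          }
      }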

    Read the article

  • What are the requirements for Windows Remote Assistance over Teredo?

    - by Jens
    I am trying to get the Windows 7 (or Vista) Remote Assistance feature to work without using UPnP on the novice's computer. After enabling Teredo on the expert's computer (which is in a corporate network, and therefore has Teredo disabled by default), I tried to connect to the novice both using Easy Connect and using the invitation file, with no success. My troubleshooting so far includes the following:
      • A connection to the novice from my home PC was successful, hinting at a misconfiguration on the expert's side.
      • Both computers have a "qualified" connection to the Teredo server.
      • Both computers have a valid Teredo IP, access to the Global_ PNRP cloud, and can resolve names registered with PNRP on the other computer.
      • The expert can resolve the PNRP ID automatically generated with an Easy Connect help request.
      • Both computers can ping the other's PNRP name.
      • Both computers can ping the other's Teredo IP address using ping -6.
    Now, I am a little stumped. I expected Remote Assistance to work at this point, since my corporate firewall has no Teredo filtering. What could cause RA not to work in this setting? Thanks in advance!

    Read the article

  • Set up Work Manager Shutdown Trigger in WebLogic Server 10.3.4 Using WLST

    - by adejuanc
    WebLogic Server's Work Managers provide a way to control work and allocated threads. You can set different scheduling guidelines for different applications, depending on your requirements. There is a default self-tuning Work Manager, but you might want to set up a custom Work Manager in some circumstances: for example, when you want the server to prioritize one application over another when a response time goal is required, or when a minimum thread constraint is needed to avoid deadlock. The Work Manager Shutdown Trigger is a tool to help with stuck threads, which will do the following:
      • Shut down the Work Manager.
      • Move the application to Admin state (not active).
      • Change the server instance health state to failed.
    Example of a shutdown trigger set in the config.xml for your domain:
      <work-manager>
        <name>stuckthread_workmanager</name>
        <work-manager-shutdown-trigger>
          <max-stuck-thread-time>30</max-stuck-thread-time>
          <stuck-thread-count>2</stuck-thread-count>
        </work-manager-shutdown-trigger>
      </work-manager>
    Understand that any misconfiguration of the Work Manager can lead to poor performance on the server. Any changes must be made and tested before going to production. How can one create a WorkManagerShutdownTrigger for WLS 10.3.4 using WLST? You should be able to create a WorkManagerShutdownTrigger using WLST by following these steps:
      edit()
      startEdit()
      cd('/SelfTuning/mydomain/WorkManagers')
      create('myWM','WorkManager')
      cd('myWM/WorkManagerShutdownTrigger')
      create('myWMst','WorkManagerShutdownTrigger')
      cd('myWMst')
      ls()

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called code first migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and you would like to upgrade to EF 4.3.1 and use this approach, so you can keep track of the changes that you make to your database.
    Adding EF 4.3.1 to our existing application
    The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET framework here. Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console; this will open the PowerShell host window where we can work with all the EF commands. In order to install this library into your existing application you should type:
      Install-Package EntityFramework
    This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> element, which contains the version you have of EntityFramework; an <entityFramework> section was also added, as shown below. This section is by default configured to use SQL Express, which won't be necessary in this case, so you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support, as shown in the previous image. At this point we face one issue: in order to be able to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.
    Getting a migration-history table into an existing database
    The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator. So we have to add it with the following code:
      using System.Data.Entity.Migrations; // add this at the top of your .cs file

      // Make sure to use the name of your existing DbContext
      public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>
      {
          public Configuration()
          {
              // Set automatic migrations to false, since we'll be applying the migrations manually in this case.
              this.AutomaticMigrationsEnabled = false;
              SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());
          }
      }
    This code sets up the configuration that we'll be using when executing all the migrations for our application. Once we have done this, we can build our application to check that everything is fine.
    Creating our initial migration
    Now let's add our initial migration. In the Package Manager Console, execute "add-migration InitialCreate"; you can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made in our application:
      • A new Migrations folder was created.
      • A new migration class called InitialCreate was created, which in most cases should have empty Up and Down methods, as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that would create an entity which already exists in your database. I found this easier when you don't have any pending updates to make to your database.
    Now we have our empty migration that will make no changes in our database and represents how things are at the beginning of our migrations. Finally, let's create our __MigrationHistory table. Optionally, you can add SQL code to delete the EdmMetadata table, which is not needed anymore:
      public override void Up()
      {
          // Just make sure that you used a 4.1 or later version
          Sql("DROP TABLE EdmMetadata");
      }
    From our Package Manager Console, let's type: Update-Database. If you would like to see the operations performed by each Update-Database command, you can add the -Verbose flag after Update-Database. This will make two important changes. First, it will execute the Up method in the initial migration, which makes no changes to the database. Second, and very important, it will create the __MigrationHistory table, which is necessary to keep track of your changes. The next time you make a change to your database, it will compare the current model to the one stored in the Model column of this table.
    Conclusion
    The important thing in this walk-through is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments, and also please check our forums here, where we keep answering questions for the community. Happy MySQL/Net Coding!

    Read the article

  • MySQL for Beginners course - first steps to lowering your Database TCOs

    - by Antoinette O'Sullivan
    Thinking about lowering your database TCO by using the MySQL Server? Don't miss the chance to get training from the source! With the newly released MySQL for Beginners class, learn how this powerful relational database management system can make your life easier and more fun! This course covers all the basics and will get you on your way, with a solid foundation. This instructor-led, hands-on class covers the fundamentals of SQL and relational databases, using MySQL as a teaching tool. Send information about this course release to a friend who might be considering getting started on the world's most popular small-footprint database.

    Read the article

  • Stakeholder Management in OUM

    - by user719921
    Where is Stakeholder Management in OUM? Stakeholder Management typically falls into the purview of the Project Manager, which means much of the associated guidance is found in the OUM Manage Focus Area (a.k.a. Manage). There is no process in Manage named Stakeholder Management, but this "touch point" can be found in a variety of other processes, including Bid Transition (BT), Communication Management (CMM) and Organizational Change Management (OCHM).
      • Stakeholder management starts in the Bid Transition process with Stakeholder Analysis.
      • This Stakeholder Analysis is used to build the Project Team Communication Plan in the Communication Management process.
      • Stakeholder management should be executed during the Execution and Control phase. For example, as issues are resolved, the project manager should take the action item to follow up with the affected stakeholders to ensure they are aware that the issue has been resolved.
      • The broader topic of stakeholder management is also addressed very thoroughly in the Organizational Change Management process in the Implement Focus Area, which is a touch point to the Organizational Change Management process in Manage.
    Check it out and let me know your thoughts!

    Read the article

  • NFJS Central Iowa Software Symposium Des Moines Trip Report

    - by reza_rahman
    As some of you may be aware, I recently joined the well-respected US-based No Fluff Just Stuff (NFJS) Tour. If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disservice. NFJS is by far the cheapest and most effective way to stay up to date through some world-class speakers and talks. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round. The NFJS Central Iowa Software Symposium was held August 8 - 10 in Des Moines. The attendance at the event and my sessions was moderate by comparison to some of the other shows. It is one of the few events of its kind that takes place in this part of the country, so it is extremely important. I had five talks total over two days, more or less back-to-back. The first one was my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end. The second talk (on the second day) was our flagship Java EE 7/8 talk. Currently the talk is basically about Java EE 7, but I'm slowly evolving it into a Java EE 8 talk as we move forward. The following is the slide deck for the talk: JavaEE.Next(): Java EE 7, 8, and Beyond from Reza Rahman. The next talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE, using code examples/demos from the Cargo Tracker Java EE Blue Prints project: Applied Domain-Driven Design Blue Prints for Java EE from Reza Rahman. The fourth was my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE". The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts -- a birds-eye view of the NoSQL landscape, how to use NoSQL via a JPA-centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc., and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: Using NoSQL with ~JPA, EclipseLink and Java EE from Reza Rahman. The JPA-based demo is available here, while the CDI-based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running. I finished off the event with a talk titled Building Java HTML5/WebSocket Applications with JSR 356. The talk introduces HTML5 WebSocket, overviews JSR 356, tours the API and ends with a small WebSocket demo on GlassFish 4. The slide deck for the talk is posted below: Building Java HTML5/WebSocket Applications with JSR 356 from Reza Rahman. The demo code is posted on GitHub: https://github.com/m-reza-rahman/hello-websocket. My next NFJS show is the Greater Atlanta Software Symposium on September 12 - 14. Here's my tour schedule so far; I'll keep you up to date as the tour goes forward:
      • September 12 - 14, Atlanta
      • September 19 - 21, Boston
      • October 17 - 19, Seattle
    I hope you'll take this opportunity to get some updates on Java EE as well as the other useful content on the tour!
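    As a taste of the JSR 356 API covered in that last talk, here is a minimal annotated server endpoint. This is a sketch of my own, not the code from the session; the path and class name are made up:
      import javax.websocket.OnMessage;
      import javax.websocket.server.ServerEndpoint;

      // Deployable in any JSR 356 container, e.g. GlassFish 4
      @ServerEndpoint("/hello")
      public class HelloEndpoint {

          // Invoked once per text message received on the WebSocket
          @OnMessage
          public String onMessage(String message) {
              return "Hello, " + message; // the return value is sent back to the client
          }
      }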

    Read the article
