Search Results

Search found 7473 results on 299 pages for 'usage statistics'.

Page 138/299 | < Previous Page | 134 135 136 137 138 139 140 141 142 143 144 145  | Next Page >

  • Windows 7 Service Pack 1 is Released: But Should You Install It?

    - by The Geek
    Microsoft has just released the final version of Service Pack 1 for Windows 7, but should you drop everything and go through the process of installing it? Where can you get it? We’ve got the answers for you. If you’ve never installed a service pack before, it’s just a big collection of fixes and changes for your operating system, bundled into a big fat download to make it more convenient if you reinstall—if you’ve kept Windows updated, it should have most of the fixes already installed through Windows Update.

    Read the article

  • update-apt-xapian-index hogs CPU, even when Update Manager is set to not check for updates

    - by Dave M G
    I have a slightly older laptop running Ubuntu 11.10. It runs fine, but frequently, when I start it up, the CPU monitor in my Gnome Panel shows 100% usage for what can be up to five minutes or so. It seems that the offending process is update-apt-xapian-index, which, if I understand correctly, is the update manager checking for updates. I have gone into the update manager settings and selected to never check for updates. I'll do that manually when I feel I have the time to leave the laptop running for it. However, despite my selection, this still happens: roughly 50% of the time or more, when I start my laptop, it runs update-apt-xapian-index. How can I get the update manager to respect my settings, or at least get this process to stop eating my CPU cycles?

    Read the article

  • What's a way to implement a flexible buff/debuff system?

    - by gkimsey
    Overview: Lots of games with RPG-like statistics allow for character "buffs", ranging from simple "Deal 25% extra damage" to more complicated things like "Deal 15 damage back to attackers when hit." The specifics of each type of buff aren't really relevant. I'm looking for a (presumably object-oriented) way to handle arbitrary buffs. Details: In my particular case, I have multiple characters in a turn-based battle environment, so I envisioned buffs being tied to events like "OnTurnStart", "OnReceiveDamage", etc. Perhaps each buff is a subclass of a main Buff abstract class, where only the relevant events are overloaded. Then each character could have a vector of buffs currently applied. Does this solution make sense? I can see dozens of event types being necessary, making a new subclass for each buff feels like overkill, and it doesn't seem to allow for any buff "interactions". For instance, I might want to implement a cap on damage boosts so that even if you had 10 different buffs that each give 25% extra damage, you only do 100% extra instead of 250%. And there are more complicated situations that ideally I could control. I'm sure everyone can come up with examples of how more sophisticated buffs can potentially interact with each other in a way that, as a game developer, I may not want. As a relatively inexperienced C++ programmer (I have generally used C in embedded systems), I feel like my solution is simplistic and probably doesn't take full advantage of the object-oriented language. Thoughts? Has anyone here designed a fairly robust buff system before?
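    One possible shape for this, sketched below in Java for brevity (the same layout maps directly onto a C++ abstract base class with virtual methods), is a Buff base class whose event hooks are no-ops by default, concrete buffs that override only the hooks they care about, and a per-character buff list that each event is folded over. Keeping the aggregation in the character (or in a small helper that owns the buff list) rather than inside individual buffs also gives you a single place to enforce cross-buff rules such as a damage-bonus cap. Apart from the OnTurnStart/OnReceiveDamage events named in the question, all names here are illustrative.

        import java.util.ArrayList;
        import java.util.List;

        // Base class: every hook is a no-op, so a concrete buff overrides only what it needs.
        abstract class Buff {
            void onTurnStart(Combatant owner) {}
            // Hooks that modify a value take it as input and return the (possibly changed) result.
            int onDealDamage(Combatant owner, int damage) { return damage; }
            int onReceiveDamage(Combatant owner, Combatant attacker, int damage) { return damage; }
        }

        // "Deal 25% extra damage"
        class DamageBoost extends Buff {
            @Override int onDealDamage(Combatant owner, int damage) { return (int) (damage * 1.25); }
        }

        // "Deal 15 damage back to attackers when hit"
        class Thorns extends Buff {
            @Override int onReceiveDamage(Combatant owner, Combatant attacker, int damage) {
                attacker.takeRawDamage(15);
                return damage;
            }
        }

        class Combatant {
            private final List<Buff> buffs = new ArrayList<>();
            private int hitPoints = 100;

            void addBuff(Buff buff) { buffs.add(buff); }

            void startTurn() {
                for (Buff buff : buffs) buff.onTurnStart(this);
            }

            // Each event is folded over the buff list; the global cap on damage bonuses
            // lives here, in one place, rather than inside any individual buff.
            int outgoingDamage(int baseDamage) {
                int damage = baseDamage;
                for (Buff buff : buffs) damage = buff.onDealDamage(this, damage);
                return Math.min(damage, baseDamage * 2); // never more than +100% total
            }

            void takeRawDamage(int amount) { hitPoints -= amount; }
        }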

    Read the article

  • When to use functional programming approach and when not? (in Java)

    - by john smith optional
    Let's assume I have a task to create a Set of class names. To remove duplication of .getName() method calls for each class, I used org.apache.commons.collections.CollectionUtils and org.apache.commons.collections.Transformer as follows:

    Snippet 1:

        Set<String> myNames = new HashSet<String>();
        CollectionUtils.collect(
            Arrays.<Class<?>>asList(My1.class, My2.class, My3.class, My4.class, My5.class),
            new Transformer() {
                public Object transform(Object o) {
                    return ((Class<?>) o).getName();
                }
            },
            myNames);

    An alternative would be this code:

    Snippet 2:

        Collections.addAll(myNames,
            My1.class.getName(), My2.class.getName(), My3.class.getName(),
            My4.class.getName(), My5.class.getName());

    So, when is the functional programming approach overhead and when is it not, and why? Is my usage of the functional approach in snippet 1 overhead, and why?
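    For what it's worth, if Java 8 or later is available, the same transformation can be written with the Stream API, which keeps the declarative style of snippet 1 without the anonymous Transformer boilerplate. A minimal sketch, assuming the same My1..My5 classes:

        import java.util.Set;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        // Map each class to its name and collect the names into a Set.
        Set<String> myNames = Stream.of(My1.class, My2.class, My3.class, My4.class, My5.class)
                .map(Class::getName)          // replaces the Transformer callback
                .collect(Collectors.toSet()); // replaces CollectionUtils.collect plus the HashSet

    At that point the functional version is no more verbose than snippet 2, and it scales better as the list of classes grows.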

    Read the article

  • Qt Graphics and performance - velvet and the QML Scene Graph, an article by Gunnar, translated by Thibaut Cuvelier

    On December 11, 2009, the QPainter documentation received a huge addition on how to optimize its use. Proper use of this tool was not accessible to everyone; it simply was not covered in the documentation. That addition was only the pretext for a series of articles on optimizing QPainter and Qt Graphics in general. Here is the first article in the series, with the others in preparation: Qt Graphics and performance: what is critical and what is not. Do you think this addition to the QPainter documentation will be useful? Do you find your applications' performance too low?...

    Read the article

  • OSB, Service Callouts and OQL - Part 1

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This post will delve into OSB internals, the problem associated with usage of Service Callout under high loads, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route vs. Service Callout actions. Please refer to the blog post for more details.

    Read the article

  • How to install Edubuntu on a system with low memory (256 Mb)?

    - by int_ua
    I'm preparing an old system with 256 MB of RAM to send to some children. It doesn't have an Ethernet controller and there is no Internet access at the destination. I've chosen Edubuntu for obvious reasons and modified it with UCK, trying to minimize memory usage just to get it installed, let alone used. But Ubiquity won't start even in openbox (edited /etc/lightdm/lightdm.conf) because there is no space left on /cow right after booting. I've already deleted things like ibus, zeitgeist, update-manager (no network access after all), twisted-core, and the plymouth logos. I'm thinking about creating a swap partition on the HDD; can it later be added to expand this /cow? Is there a package for the text-mode installation that is used on Alternate CDs? I don't want to re-create Edubuntu from an Alternate CD. This behavior is reproducible in a VM limited to 256 MB of RAM.

    Read the article

  • Use open-source programs in your company?

    - by eversor
    Are there any cons to making your employees use open-source programs in your company? I am planning to start a business and I wonder why companies usually work with proprietary software, such as Microsoft Word to quote the most famous one. Why do they not use Open Office (or Libre Office) etc.? From my point of view, you can save a lot of money and help the open-source community by, for instance, giving them part of your profits in the form of donations. I do not know any (medium-to-big) company that does this. Perhaps you could give me some examples, just to prove that this model of open-source usage/collaboration works.

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly-typed object manually, so for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1 KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1 KB each), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56 Mbps), so the results of the fetch strategy are significantly below what they could be with a higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)
    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Azure Florida Association: New user group announcement

    - by Herve Roggero
    I am proud to announce the creation of a new virtual user group: the Azure Florida Association. The mission of this group is to bring national and international speakers to the forefront of the Florida Azure community. Speakers include Microsoft employees, MVPs and senior developers that use the Azure platform extensively.

    How to learn about meetings and the group: go to http://www.linkedin.com/groups?gid=4177626

    First Meeting Announcement
    Date: January 25, 2012 @ 4PM ET
    Topic: Demystifying SQL Azure
    Description: What is SQL Azure, Value Proposition, Usage Scenarios, Concepts and Architecture, What is there and what is not, Tips and Tricks
    Bio: Vikas is a versatile technical consultant whose knowledge and experience range from products to projects, from .NET to IBM Mainframe Assembler. He has led and mentored people on different technical platforms, and has focused on new technologies from Microsoft for the past few years. He also takes a keen interest in Methodologies, Quality and Processes.

    Read the article

  • Digg Reader already available in beta; will Facebook also finally offer an alternative to Google Reader?

    Google is shutting down Google Reader and its API in its traditional "spring cleaning": not profitable enough. Google has made the end of its RSS feed service official. Launched in 2005, Google Reader let users aggregate and organize news feeds as they saw fit. But since the rise of social networks such as the giants Facebook and Twitter, the service has seen much less success. The company wants to refocus on its flagship projects in order to be as competitive as possible. It therefore recommends that users who still use the service retrieve their stored data before the deadline. In a post on its blog it explains the download procedures...

    Read the article

  • Effects of automated time tracking/monitoring [closed]

    - by user73937
    What are the effects of monitoring the developers' computer usage? (Which programs they use - based on the titles of the applications - and how much time in a day they use the keyboard and mouse.) Would it have any positive or negative effects on productivity, morale, motivation, etc.? It will not have any direct impact on the developers' salary or their performance review; it's just out of curiosity. Only the developer and their manager will see the results. Would it change anything if only the developer were allowed to see the results? The developer can disable the monitoring (for privacy), but that time won't count as work time (in the monitoring program).

    Read the article

  • Is it possible to construct a cube with less than 24 vertices

    - by Telanor
    I have a cube-based world like Minecraft and I'm wondering if there's a way to construct a cube with fewer than 24 vertices so I can reduce memory usage. It doesn't seem possible to me for 2 reasons: the normals wouldn't come out right and per-face textures wouldn't work. Is this the case or am I wrong? Maybe there's some fancy new DX11 tech that can help? Edit: Just to clarify, I have 2 requirements: I need surface normals for each cube face in order to do proper lighting, and I need a way to address a different index in a texture array for each cube face.

    Read the article

  • Computer Lab School for Orphans

    - by Brendon
    I am helping an NGO called Orphans Found Fund, here in Arusha, Tanzania, set up a computer lab to teach students about Ubuntu and open source applications. I have installed Ubuntu 10.10 on all the systems. What I'm wondering about is how to tweak the systems so that the kids cannot:
    - Delete or alter system files
    - Alter the system settings
    - Add or remove applications
    - Exceed a time limit (like an Internet cafe)
    Also, as the administrator I would like to monitor usage from another system to make sure that abuse of the network is not taking place. Any advice is much appreciated. Brendon

    Read the article

  • How You Helped Shape Java EE 7...

    - by reza_rahman
    I have been working with the JCP in various roles since EJB 3/Java EE 5 (much of it on my own time), eventually culminating in my decision to accept my current role at Oracle (despite its inevitable set of unique challenges, a role I find by and large positive and fulfilling). During these years, it has always been clear to me that pretty much everyone in the JCP genuinely cares about openness, feedback and developer participation. Perhaps the most visible sign to date of this high regard for grassroots-level input is a survey on Java EE 7 conducted a few months ago. The survey was designed to get open feedback on a number of critical issues central to the Java EE 7 umbrella specification, including which APIs to include in the standard.

    When we started the survey, I don't think anyone was certain what the level of participation from developers would really be. I also think everyone was pleasantly surprised that a large number of developers (around 1100) took the time out to vote on these very important issues that could impact their own professional life. And it wasn't just a matter of the quantity of responses. I was particularly impressed with the quality of the comments made through the survey (some of which I'll try to do justice to below).

    With Java EE 7 under our belt and the horizons for Java EE 8 emerging, this is a good time to thank everyone that took the survey once again for their thoughts and let you know what the impact of your voice actually was. As an aside, you may be happy to know that we are working hard behind the scenes to try to put together a similar survey to help kick off the agenda for Java EE 8 (although this is by no means certain). I'll break things down by the questions asked in the survey, the responses and the resulting change in the specification.

    APIs to Add to Java EE 7 Full/Web Profile

    The first question in the survey asked which of four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profile respectively. Developers by and large wanted all the new APIs added to the full platform. The comments expressed particularly strong support for WebSocket and JCache. Others expressed dissatisfaction over the lack of a JSON binding (as opposed to JSON processing) API. WebSocket, JSON-P and JBatch are now part of Java EE 7. In addition, the long-awaited Java EE Concurrency Utilities API was also included in the Full Profile. Unfortunately, JCache was not finalized in time for Java EE 7 and the decision was made not to hold up the Java EE release any longer. JCache continues to move forward strongly and will very likely be included in Java EE 8 (it will be available much sooner than Java EE 8 to boot). An emergent standard for JSON-B is also a strong possibility for Java EE 8. When it came to the Web Profile, developers were supportive of adding WebSocket and JSON-P, but not JBatch and JCache. Both WebSocket and JSON-P are now part of the Web Profile, which also includes the already popular JAX-RS API.

    Enabling CDI by Default

    The second question asked whether CDI should be enabled in Java EE by default. The overwhelming majority of developers supported the default enablement of CDI. In addition, developers expressed a desire for better CDI/Java EE alignment (with regards to EJB and JSF in particular). Some developers expressed legitimate concerns over the performance implications of enabling CDI globally as well as the potential conflict with other JSR 330 implementations like Spring and Guice. CDI is enabled by default in Java EE 7. Respecting the legitimate concerns, CDI 1.1 was very careful to add additional controls around component scanning. While a lot of work was done in Java EE 6 and Java EE 7 around CDI alignment, further alignment is under serious consideration for Java EE 8.

    Consistent Usage of @Inject

    The third question was about using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations (e.g. @BatchContext). A majority of developers wanted consistent usage of @Inject. The comments again reflected a strong desire for CDI/Java EE alignment. A lot of emphasis in Java EE 7 was put into using @Inject consistently. For example, the JBatch specification is focused on using @Inject wherever possible. JAX-RS remains an exception with its existing custom injection annotations. However, the JAX-RS specification leads understand the importance of eventual convergence, hopefully in Java EE 8.

    Expanding the Use of @Stereotype

    The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A solid majority of developers supported the idea of making @Stereotype more universal in Java EE. The comments maintained the general theme of strong support for CDI/Java EE alignment. Unfortunately, there was not enough time and resources in Java EE 7 to implement this fairly pervasive feature. However, it remains a serious consideration for Java EE 8.

    Expanding Interceptor Use

    The final set of questions was about expanding interceptors further across Java EE. Developers strongly supported the concept. Along with injection, interceptors are now supported across all Java EE 7 components including Servlets, Filters, Listeners, JAX-WS endpoints, JAX-RS resources, WebSocket endpoints and so on.

    I hope you are encouraged by how your input to the survey helped shape Java EE 7 and continues to shape Java EE 8. Participating in these sorts of surveys is of course just one way of contributing to Java EE. Another great way to stay involved is the Adopt-A-JSR Program. A large number of developers are already participating through their local JUGs. You could of course become a Java EE JSR expert group member or observer. You should stay tuned to The Aquarium for the progress of Java EE 8 JSRs if that's something you want to look into...
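    To make the @Inject point concrete, a minimal CDI-managed bean using the standard JSR 330 annotation would look like the sketch below; the collaborating types are hypothetical and simply stand in for whatever a given specification might otherwise expose through its own injection annotation:

        import javax.inject.Inject;

        public class CheckoutService {
            // The portable JSR 330 annotation, rather than a spec-specific one,
            // lets the CDI container supply the dependency in any managed component.
            @Inject
            private PaymentGateway paymentGateway; // hypothetical collaborator

            public Receipt checkout(Order order) { // Order and Receipt are hypothetical types
                return paymentGateway.charge(order);
            }
        }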

    Read the article

  • PHP, when to use iterators, how to buffer results?

    - by Jon L.
    When is it best to use Iterators in PHP, and how can they be implemented to best avoid loading all objects into memory simultaneously? Do any constructs exist in PHP so that we can queue up the results of an operation for use with an Iterator, while again avoiding loading all objects into memory simultaneously? An example would be a curl HTTP request against a REST server. In the case of an HTTP request that returns all results at once (a la curl), would we be better off going with streaming results, and if so, are there any limitations or pitfalls to be aware of? If using streaming, is it better to replace curl with a PHP native stream/socket? My intention is to implement Iterators for a REST client, and separately for a document ORM that I'm maintaining, but only if I can do so while gaining benefits from reduced memory usage, increased performance, etc. Thanks in advance for any responses :-)
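    The general shape being asked about, independent of language, is a lazy iterator that fetches one page of results at a time and hands items out on demand, so only the current page is ever held in memory. A rough sketch follows in Java (used here only because it is the language of other snippets on this page; in PHP the equivalent building blocks are the Iterator interface or, on PHP 5.5+, a generator function using yield), with fetchPage() standing in for the actual curl/REST call:

        import java.util.Collections;
        import java.util.Iterator;
        import java.util.List;
        import java.util.NoSuchElementException;

        // Iterates over a paged REST resource without loading every page up front.
        abstract class PagedIterator<T> implements Iterator<T> {
            private List<T> page = Collections.emptyList(); // currently buffered page
            private int index = 0;                          // position within the page
            private int pageNumber = 0;                     // next page to request
            private boolean exhausted = false;

            // Subclasses perform the actual HTTP call; an empty list means "no more data".
            protected abstract List<T> fetchPage(int pageNumber);

            @Override public boolean hasNext() {
                while (!exhausted && index >= page.size()) {
                    page = fetchPage(pageNumber++); // pull the next page only when needed
                    index = 0;
                    exhausted = page.isEmpty();
                }
                return !exhausted;
            }

            @Override public T next() {
                if (!hasNext()) throw new NoSuchElementException();
                return page.get(index++);
            }
        }

    Whether this is worth it depends on the result size: for a handful of records the buffering adds complexity for no gain, but for large result sets it keeps memory usage flat.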

    Read the article

  • Open Source Project all dressed up but nowhere to go...

    - by Calanus
    Over the past 2 years a colleague and I have built an online statistical analysis application using a mixture of Silverlight, WCF and R. I (a C# programmer) wrote all the Silverlight and WCF code whilst my colleague (a statistician) came up with the stats algorithms and wrote the R code. Now we think that this app is fairly unique - a rich GUI online statistics application that is much more intuitive than all the other online stat apps that I've seen. But despite this we don't really know where to go with the project, mainly for the following reasons: 1) It's fairly complicated stuff - without the mix of programming and stats skills it would be difficult for anyone to "get into" the project and contribute. 2) We are stalled by the lack of a proper place to host the site. Currently it sits on the family Windows 7 media centre, not exactly the best place to host it as it could interfere with the missus trying to watch Corrie/Friends/Oprah etc. So, anyone got any ideas on how to move forward with this? I guess that my strength is programming, not marketing, so despite working hard at this for the past couple of years I feel that I've reached a dead end! Also, does anyone know of any free Windows hosting for open source projects? If I could find a proper place to put the app I might feel re-energised about the whole thing. The source code is on CodePlex at http://silverstats.codeplex.com, whilst the app is currently hosted at http://silverstats.co.uk

    Read the article

  • Constituent Experience Counts In Public Sector

    - by Michael Seback
    Businesses and government organizations are operating in an era of the empowered customer where service and communication channels are challenged every day. Consumers in the private sector have high expectations, from purchasing gifts online, to reading reviews on social sites, to expecting the companies they do business with to know and reward them. In the Public Sector, constituents also expect government organizations to provide consistent and timely service across agencies and touch points. Examples include requesting critical city services, applying for social assistance or reviewing insurance plans for a health insurance exchange. If an individual does not receive the services they need at the right time and place, it can create a dire situation involving housing, food or healthcare assistance. Government organizations need to deliver a fast, reliable and personalized experience to constituents. Look at a few recent statistics from a government-focused survey:

    How do you define good customer service? 70% improved services, 48% shortest time to provide information, 44% shortest time to resolve complaints.
    What are ways/opportunities to improve customer service? 69% increased collaboration across agencies, 41% increased customer service channels.
    Are you using the data collected to make informed decisions to improve customer service efforts? 39% data collection is limited, not used to improve decision making.

    Source: Re-Imagining Customer Service in Government, 2012. Click here to see the highlights. Would you like to get started? Read Eight Steps to Great Constituent Experiences for Government.

    Read the article

  • Choppy window movement in Gnome 3.4 on Ubuntu 12.04

    - by mjrussell
    I have been using Gnome 3.2 since Ubuntu 11.10 was released and it has always been perfectly smooth and performed extremely well, much better than Unity. After doing a clean installation of Ubuntu 12.04, Gnome 3.4 performs less well. If just one window of a relatively simple application, such as Gnome Terminal, is opened and moved around, the movement is sometimes very choppy, but the rest of the time it's perfectly smooth. The times when it's choppy seem to be when part of the window goes off the bottom or right side of the screen. Also, if there are multiple windows open, it is almost always choppy. These facts suggest to me that it's something to do with the compositor. Unity works perfectly smoothly. Memory usage is only at about 500-600MB, out of 3GB, even with a few things open. The graphics card is the on-board Intel graphics on the Core i5 M450. Does anyone have any ideas what might be causing this problem? Thanks

    Read the article

  • This Week in Geek History: YouTube goes Public, Blu-ray vs. HD DVD, and All Your Base Are Belong To Us

    - by Jason Fitzpatrick
    Every week we bring you a snapshot of the current week in the history of technological and geeky endeavors. This week we’re taking a look at the birth of YouTube, the death of the HD DVD format, and the first mega meme.

    Read the article

  • Need to adjust touchpad edge scroll area

    - by MikeVB
    I have an Acer Aspire that dual-boots XP and Ubuntu. On the Windows side, the driver allows you to set how close your finger has to be to the edge of the touchpad before it goes into scroll mode. In Ubuntu I don't have the option (at least in the GUI) to change the scroll area on the pad. Is there a conf file or another way to change this? I'm constantly getting into the scroll area during normal usage. I would like to leave edge scrolling on without losing so much pad area to the scroll feature. Thanks a lot.

    Read the article

  • Microsoft joins the rest of us...and counts down to the end of IE6

    - by brian_ritchie
    Microsoft launched a website dedicated to the demise of IE6.  Here's their pitch...10 years ago a browser was born.  Its name was Internet Explorer 6. Now that we’re in 2011, in an era of modern web standards, it’s time to say goodbye. We'll watch Internet Explorer 6 usage drop to less than 1% worldwide, so more websites can choose to drop support for Internet Explorer 6, saving hours of work for web developers.  Thanks Microsoft!  We've been waiting for this day to come for a long time.  Of course, it would have been nice if IE6 was gone 5 years ago...but who's counting.

    Read the article

  • Is there any good reason I would want my website to be framed?

    - by minitech
    I'm building a website that's not security-critical in any way at all, so having somebody put a page in an <iframe> is not particularly dangerous to its users. However, as my website doesn't have script plugins that will be used anywhere else, is there any reason why I shouldn't just apply: X-Frame-Options: Deny to every page on my website? Is there any valid reason for any other website to embed mine? I've seen plenty of content-stealing ones and attempts to hijack user accounts, but never an actual good usage of frames that's not an explicit feature of the website.
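    For reference, one way to apply the header to every page (a sketch only; it assumes a Java servlet stack, which the question does not specify, and any comparable web framework has an equivalent hook) is a filter that sets it on every response:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.annotation.WebFilter;
        import javax.servlet.http.HttpServletResponse;

        // Adds X-Frame-Options: DENY to every response so no other site can frame these pages.
        @WebFilter("/*")
        public class DenyFramingFilter implements Filter {
            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                ((HttpServletResponse) res).setHeader("X-Frame-Options", "DENY");
                chain.doFilter(req, res);
            }

            @Override public void init(FilterConfig config) {}
            @Override public void destroy() {}
        }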

    Read the article

  • Juju LXC configuration

    - by Preethi
    I've looked at this post (http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage) for setting up juju in a local environment with lxc. However, is there a way to use juju with lxc in a non-local environment? I am looking at a scenario where lxc containers are deployed on multiple nodes. That is, let's say I have virtual machines m1 and m2, with wordpress deployed in a container on m1 and mysql deployed in a container on m2. Is there a way to orchestrate these deployments with juju?

    Read the article

  • How can I solve this SAT direct corner intersection edge case?

    - by ssb
    I have a working SAT implementation, but I am running into a problem where direct collisions at a corner do not work for tiled surfaces. That is, it clips on the surface when going in a certain direction because it gets hung up on one of the tiles, and so, for example, if I walk across a floor while holding both down and left, the player will stop when meeting the next shape because the player will be colliding with the right side rather than with the top of the floor tile. This illustration shows what I mean: The top block will translate right first and then up. I have checked here and here which are helpful, but this does not address what I should do in a situation where I don't have a tile-based world. My usage of the term "tile" before isn't really accurate since what I'm doing here is manually placing square obstacles next to each other, not assigning them spots on a grid. What can I do to fix this?

    Read the article
