Search Results

Search found 8048 results on 322 pages for 'partial upgrade'.


  • ANTS Memory Profiler 7.0 Review

    - by Michael B. McLaughlin
    (This is my first review as a part of the GeeksWithBlogs.net Influencers program. It’s a program in which I (and the others who have been selected for it) get the opportunity to check out new products and services and write reviews about them. We don’t get paid for this, but we do generally get to keep a copy of the software or retain an account for some period of time on the service that we review. In this case I received a copy of Red Gate Software’s ANTS Memory Profiler 7.0, which was released in January. I don’t have any upgrade rights nor is my review guided, restrained, influenced, or otherwise controlled by Red Gate or anyone else. But I do get to keep the software license. I will always be clear about what I received whenever I do a review – I leave it up to you to decide whether you believe I can be objective. I believe I can be. If I used something and really didn’t like it, keeping a copy of it wouldn’t be worth anything to me. In that case though, I would simply uninstall/deactivate/whatever the software or service and tell the company what I didn’t like about it so they could (hopefully) make it better in the future. I don’t think it’d be polite to write up a terrible review, nor do I think it would be a particularly good use of my time. There are people who get paid to review things for a living, so I leave it to them to tell you what they think is bad and why. I’ll only spend my time telling you about things I think are good.)

    Overview of Common .NET Memory Problems

    When coming to the land of managed memory from the wilds of unmanaged code, it’s easy to say to one’s self, “Wow! Now I never have to worry about memory problems again!” But this simply isn’t true. Managed code environments, such as .NET, make many, many things easier. You will never have to worry about memory corruption due to a bad pointer, for example (unless you’re working with unsafe code, of course). But managed code has its own set of memory concerns. For example, failing to unsubscribe from events when you are done with them leaves the publisher of an event with a reference to the subscriber. If you eliminate all your own references to the subscriber, then that memory is effectively lost since the GC won’t delete it because of the publishing object’s reference. When the publishing object itself becomes subject to garbage collection then you’ll get that memory back finally, but that could take a very long time depending on the life of the publisher. Another common source of resource leaks is failing to properly release unmanaged resources. When writing a class that contains members that hold unmanaged resources (e.g. any of the Stream-derived classes, IsolatedStorageFile, most classes ending in “Reader” or “Writer”), you should always implement IDisposable, making sure to use a properly written Dispose method. And when you are using an instance of a class that implements IDisposable, you should always make sure to use a 'using' statement in order to ensure that the object’s unmanaged resources are disposed of properly.
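    To make the event-subscription leak and the ‘using’ guidance just described concrete, here is a small illustrative C# sketch (the class names and the event are invented for the example; they are not taken from the review or from ANTS Memory Profiler):

        using System;
        using System.IO;

        class Publisher
        {
            public event EventHandler SomethingHappened;

            public void Raise()
            {
                var handler = SomethingHappened;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        class Subscriber : IDisposable
        {
            private readonly Publisher _publisher;

            public Subscriber(Publisher publisher)
            {
                _publisher = publisher;
                // The publisher now holds a reference to this subscriber via the delegate.
                _publisher.SomethingHappened += OnSomethingHappened;
            }

            private void OnSomethingHappened(object sender, EventArgs e)
            {
                // 'using' guarantees the stream's unmanaged handle is released, even on exceptions.
                using (var reader = new StreamReader("data.txt"))
                {
                    Console.WriteLine(reader.ReadLine());
                }
            }

            public void Dispose()
            {
                // Unsubscribing removes the publisher's reference so the GC can reclaim this object.
                _publisher.SomethingHappened -= OnSomethingHappened;
            }
        }

    If Dispose is never called (or the handler is never removed), every Subscriber stays reachable for as long as its Publisher lives, which is exactly the kind of leak a memory profiler helps you spot.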
    (A ‘using’ statement is a nicer, cleaner looking, and easier to use version of a try-finally block. The compiler actually translates it as though it were a try-finally block. Note that Code Analysis warning 2202 (CA2202) will often be triggered by nested using blocks. A properly written dispose method ensures that it only runs once such that calling dispose multiple times should not be a problem. Nonetheless, CA2202 exists and if you want to avoid triggering it then you should write your code such that only the innermost IDisposable object uses a ‘using’ statement, with any outer code making use of appropriate try-finally blocks instead). Then, of course, there are situations where you are operating in a memory-constrained environment or else you want to limit or even eliminate allocations within a certain part of your program (e.g. within the main game loop of an XNA game) in order to avoid having the GC run. On the Xbox 360 and Windows Phone 7, for example, for every 1 MB of heap allocations you make, the GC runs; the added time of a GC collection can cause a game to drop frames or run slowly thereby making it look bad. Eliminating allocations (or else minimizing them and calling an explicit Collect at an appropriate time) is a common way of avoiding this (the other way is to simplify your heap so that the GC’s latency is low enough not to cause performance issues).

    ANTS Memory Profiler 7.0

    When the opportunity to review Red Gate’s recently released ANTS Memory Profiler 7.0 arose, I jumped at it. In order to review it, I was given a free copy (which does not include upgrade rights for future versions) which I am allowed to keep. For those of you who are familiar with ANTS Memory Profiler, you can find a list of new features and enhancements here. If you are an experienced .NET developer who is familiar with .NET memory management issues, ANTS Memory Profiler is great. More importantly still, if you are new to .NET development or you have no experience or limited experience with memory profiling, ANTS Memory Profiler is awesome. From the very beginning, it guides you through the process of memory profiling. If you’re experienced and just want to dive in however, it doesn’t get in your way. The help items are well designed and located right next to the UI controls so that they are easy to find without being intrusive. When you first launch it, it presents you with a “Getting Started” screen that contains links to “Memory profiling video tutorials”, “Strategies for memory profiling”, and the “ANTS Memory Profiler forum”. I’m normally the kind of person who looks at a screen like that only to find the “Don’t show this again” checkbox. Since I was doing a review, though, I decided I should examine them. I was pleasantly surprised. The overview video clocks in at three minutes and fifty seconds. It begins by showing you how to get started profiling an application. It explains that profiling is done by taking memory snapshots periodically while your program is running and then comparing them. ANTS Memory Profiler (I’m just going to call it “ANTS MP” from here) analyzes these snapshots in the background while your application is running. It briefly mentions a new feature in Version 7, a new API that gives you the ability to trigger snapshots from within your application’s source code (more about this below). You can also, and this is the more common way you would do it, take a memory snapshot at any time from within the ANTS MP window by clicking the “Take Memory Snapshot” button in the upper right corner. The overview video goes on to demonstrate a basic profiling session on an application that pulls information from a database and displays it.
It shows how to switch which snapshots you are comparing, explains the different sections of the Summary view and what they are showing, and proceeds to show you how to investigate memory problems using the “Instance Categorizer” to track the path from an object (or set of objects) to the GC’s root in order to find what things along the path are holding a reference to it/them. For a set of objects, you can then click on it and get the “Instance List” view. This displays all of the individual objects (including their individual sizes, values, etc.) of that type which share the same path to the GC root. You can then click on one of the objects to generate an “Instance Retention Graph” view. This lets you track directly up to see the reference chain for that individual object. In the overview video, it turned out that there was an event handler which was holding on to a reference, thereby keeping a large number of strings that should have been freed in memory. Lastly the video shows the “Class List” view, which lets you dig in deeply to find problems that might not have been clear when following the previous workflow. Once you have at least one memory snapshot you can begin analyzing. The main interface is in the “Analysis” tab. You can also switch to the “Session Overview” tab, which gives you several bar charts highlighting basic memory data about the snapshots you’ve taken. If you hover over the individual bars (and the individual colors in bars that have more than one), you will see a detailed text description of what the bar is representing visually. The Session Overview is good for a quick summary of memory usage and information about the different heaps. You are going to spend most of your time in the Analysis tab, but it’s good to remember that the Session Overview is there to give you some quick feedback on basic memory usage stats. As described above in the summary of the overview video, there is a certain natural workflow to the Analysis tab. You’ll spin up your application and take some snapshots at various times such as before and after clicking a button to open a window or before and after closing a window. Taking these snapshots lets you examine what is happening with memory. You would normally expect that a lot of memory would be freed up when closing a window or exiting a document. By taking snapshots before and after performing an action like that you can see whether or not the memory is really being freed. If you already know an area that’s giving you trouble, you can run your application just like normal until just before getting to that part and then you can take a few strategic snapshots that should help you pin down the problem. Something the overview didn’t go into is how to use the “Filters” section at the bottom of ANTS MP together with the Class List view in order to narrow things down. The video tutorials page has a nice 3 minute intro video called “How to use the filters”. It’s a nice introduction and covers some of the basics. I’m going to cover a bit more because I think they’re a really neat, really helpful feature. Large programs can bring up thousands of classes. Even simple programs can instantiate far more classes than you might realize. In a basic .NET 4 WPF application for example (and when I say basic, I mean just MainWindow.xaml with a button added to it), the unfiltered Class List view will have in excess of 1000 classes (my simple test app had anywhere from 1066 to 1148 classes depending on which snapshot I was using as the “Current” snapshot). 
    This is amazing in some ways as it shows you in stark detail just how immensely powerful the WPF framework is. But hunting through 1100 classes isn’t productive, no matter how cool it is that there are that many classes instantiated and doing all sorts of awesome things. Let’s say you wanted to examine just the classes your application contains source code for (in my simple example, that would be the MainWindow and App). Under “Basic Filters”, click on “Classes with source” under “Show only…”. Voilà. Down from 1070 classes in the snapshot I was using as “Current” to 2 classes. If you then click on a class’s name, it will show you (to the right of the class name) two little icon buttons. Hover over them and you will see that you can click one to view the Instance Categorizer for the class and another to view the Instance List for the class. You can also show classes based on which heap they live on. If you choose both a Baseline snapshot and a Current snapshot then you can use the “Comparing snapshots” filters to show only: “New objects”; “Surviving objects”; “Survivors in growing classes”; or “Zombie objects” (if you aren’t sure what one of these means, you can click the helpful “?” in a green circle icon to bring up a popup that explains them and provides context). Remember that your selection(s) under the “Show only…” heading will still apply, so you should update those selections to make sure you are seeing the view you want. There are also links under the “What is my memory problem?” heading that can help you diagnose the problems you are seeing, including one for “I don’t know which kind I have” for situations where you know generally that your application has some problems but aren’t sure which kind the behavior you have been seeing (OutOfMemoryExceptions, continually growing memory usage, larger memory use than expected at certain points in the program) points to. The Basic Filters are not the only filters there are. “Filter by Object Type” gives you the ability to filter by: “Objects that are disposable”; “Objects that are/are not disposed”; “Objects that are/are not GC roots” (GC roots are things like static variables); and “Objects that implement _______”. “Objects that implement” is particularly neat. Once you check the box, you can then add one or more classes and interfaces that an object must implement in order to survive the filtering. Lastly there is “Filter by Reference”, which gives you the option to pare down the list based on whether an object is “Kept in memory exclusively by” a particular item, a class/interface, or a namespace; whether an object is “Referenced by” one or more of those choices; and whether an object is “Never referenced by” one or more of those choices. Remember that filtering is cumulative, so anything you had set in one of the filter sections still remains in effect unless and until you go back and change it. There’s quite a bit more to ANTS MP – it’s a very full-featured product – but I think I touched on all of the most significant pieces. You can use it to debug: a .NET executable; an ASP.NET web application (running on IIS); an ASP.NET web application (running on Visual Studio’s built-in web development server); a Silverlight 4 browser application; a Windows service; a COM+ server; and even something called an XBAP (local XAML browser application). You can also attach to a .NET 4 process to profile an application that’s already running.
    The startup screen also has a large number of “Charting Options” that let you adjust which statistics ANTS MP should collect. The default selection is a good, minimal set. It’s worth your time to browse through the charting options to examine other statistics that may also help you diagnose a particular problem. The more statistics ANTS MP collects, the longer collection takes, so just turning everything on is probably a bad idea. But the option to selectively add in additional performance counters from the extensive list could be a very helpful thing for your memory profiling as it lets you see additional data that might provide clues about a particular problem that has been bothering you. ANTS MP integrates very nicely with all versions of Visual Studio that support plugins (i.e. all of the non-Express versions). Just note that if you choose “Profile Memory” from the “ANTS” menu it will launch profiling for whichever project you have set as the Startup project. One quick tip from my experience so far using ANTS MP: if you want to properly understand your memory usage in an application you’ve written, first create an “empty” version of the type of project you are going to profile (a WPF application, an XNA game, etc.) and do a quick profiling session on that so that you know the baseline memory usage of the framework itself. By “empty” I mean just create a new project of that type in Visual Studio then compile it and run it with profiling – don’t do anything special or add in anything (except perhaps for any external libraries you’re planning to use). The first thing I tried ANTS MP out on was a demo XNA project of an editor that I’ve been working on for quite some time that involves a custom extension to XNA’s content pipeline. The first time I ran it and saw the unmanaged memory usage I was convinced I had some horrible bug that was creating extra copies of texture data (the demo project didn’t have a lot of texture data so when I saw a lot of unmanaged memory I instantly figured I was doing something wrong). Then I thought to run an empty project through and when I saw that the amount of unmanaged memory was virtually identical, it dawned on me that the CLR itself sits in unmanaged memory and that (thankfully) there was nothing wrong with my code! Quite a relief. Earlier, when discussing the overview video, I mentioned the API that lets you take snapshots from within your application. I gave it a quick trial and it’s very easy to integrate and make use of and is a really nice addition (especially for projects where you want to know what, if any, allocations there are in a specific, complicated section of code). The only concern I had was that if I hadn’t watched the overview video I might never have known it existed. Even then it took me five minutes of hunting around Red Gate’s website before I found the “Taking snapshots from your code” article that explains what DLL you need to add as a reference and what method of what class you should call in order to take an automatic snapshot (including the helpful warning to wrap it in a try-catch block since, under certain circumstances, it can raise an exception, such as trying to call it more than 5 times in 30 seconds). The difficulty in discovering and then finding information about the automatic snapshots API was one thing I thought could use improvement. Another thing I think would make it even better would be local copies of the webpages it links to.
    Although I’m generally always connected to the internet, I imagine there are more than a few developers who aren’t or who are behind very restrictive firewalls. For them (and for me, too, if my internet connection happens to be down), it would be nice to have those documents installed locally or to have the option to download an additional “documentation” package that would add local copies. Another thing that I wish could be easier to manage is the Filters area. Finding and setting individual filters is very easy, as is understanding what those filters do. And breaking it up into three sections (basic, by object, and by reference) makes sense. But I could easily see myself running a long profiling session and forgetting that I had set some filter a long while earlier in a different filter section and then spending quite a bit of time trying to figure out why some problem that was clearly visible in the data wasn’t showing up in, e.g., the instance list before remembering to check all the filters for that one setting that was only culling a few things from view. Some sort of indicator icon next to the filter section names that appears when you have at least one filter set in that area would be a nice visual clue to remind me that “oh yeah, I told it to only show objects on the Gen 2 heap! That’s why I’m not seeing those instances of the SuperMagic class!” Something that would be nice (but that Red Gate cannot really do anything about) would be if this could be used in Windows Phone 7 development. If Microsoft and Red Gate could work together to make this happen (even if just on the WP7 emulator), that would be amazing. Especially given the memory constraints that apps and games running on mobile devices need to work within, a good memory profiler would be a phenomenally helpful tool. If anyone at Microsoft reads this, it’d be really great if you could make something like that happen. Perhaps even a (subsidized) custom version just for WP7 development. (For XNA games, of course, you can create a Windows version of the game and use ANTS MP on the Windows version in order to get a better picture of your memory situation. For Silverlight on WP7, though, there’s quite a bit of educated guesswork and WeakReference creation followed by forced collections in order to find the source of a memory problem.) The only other thing I found myself wanting was a “Back” button. Between my Windows Phone 7, Zune, and other things, I’ve grown very used to having a “back stack” that lets me just navigate back to where I came from. The ANTS MP interface is surprisingly easy to use given how much it lets you do, and once you start using it for any amount of time, you learn all of the different areas such that you know where to go. And it does remember the state of the areas you were previously in, of course. So if you go to, e.g., the Instance Retention Graph from the Class List and then return back to the Class List, it will remember which class you had selected and all that other state information. Still, a “Back” button would be a welcome addition to a future release.

    Bottom Line

    ANTS Memory Profiler is not an inexpensive tool. But my time is valuable. I can easily see ANTS MP saving me enough time tracking down memory problems to justify it on a cost basis.
    More importantly to me, it is a good feeling to know what is happening memory-wise in my programs and to have the confidence that my code doesn’t have any hidden time bombs that will cause it to OOM if I leave it running for longer than I do when I spin it up quickly for debugging or just to see how a new feature looks and feels. It’s a feeling that I like having and want to continue to have. I got the current version for free in order to review it. Having done so, I’ve now added it to my must-have tools and will gladly lay out the money for the next version when it comes out. It has a 14-day free trial, so if you aren’t sure if it’s right for you, or if you think it seems interesting but aren’t really sure if it’s worth shelling out the money for, give it a try.

    Read the article

  • Package system broken - E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by delha
    After installing some packages and libraries I have an error on Package Manager, I can't run any update because it says: "The package system is broken If you are using third party repositories then disable them, since they are a common source of problems. Now run the following command in a terminal: apt-get install -f " I've tried to do what it says and it returns me: jara@jara-Aspire-5738:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libcaca-dev libopencv2.3-bin nite-dev python-bluez ps-engine libslang2-dev python-sphinx ros-electric-geometry-tutorials ros-electric-geometry-visualization python-matplotlib libzzip-dev ros-electric-orocos-kinematics-dynamics ros-electric-physics-ode libbluetooth-dev libaudiofile-dev libassimp2 libnetpbm10-dev ros-electric-laser-pipeline python-epydoc ros-electric-geometry-experimental libasound2-dev evtest python-matplotlib-data libyaml-dev ros-electric-bullet ros-electric-executive-smach ros-electric-documentation libgl2ps0 libncurses5-dev ros-electric-robot-model texlive-fonts-recommended python-lxml libwxgtk2.8-dev daemontools libxxf86vm-dev libqhull-dev libavahi-client-dev ros-electric-geometry libgl2ps-dev libcurl4-openssl-dev assimp-dev libusb-1.0-0-dev libopencv2.3 ros-electric-diagnostics-monitors libsdl1.2-dev libjs-underscore libsdl-image1.2 tipa libusb-dev libtinfo-dev python-tz python-sip libfltk1.1 libesd0 libfreeimage-dev ros-electric-visualization x11proto-xf86vidmode-dev python-docutils libvtk5.6 ros-electric-assimp x11proto-scrnsaver-dev libnetcdf-dev libidn11-dev libeigen3-dev joystick libhdf5-serial-1.8.4 ros-electric-joystick-drivers texlive-fonts-recommended-doc esound-common libesd0-dev tcl8.5-dev ros-electric-multimaster-experimental ros-electric-rx libaudio-dev ros-electric-ros-tutorials libwxbase2.8-dev ros-electric-visualization-common python-sip-dev ros-electric-visualization-tutorials libfltk1.1-dev libpulse-dev libnetpbm10 python-markupsafe openni-dev tk8.5-dev wx2.8-headers freeglut3-dev libavahi-common-dev python-roman python-jinja2 ros-electric-robot-model-visualization libxss-dev libqhull5 libaa1-dev ros-electric-eigen freeglut3 ros-electric-executive-smach-visualization ros-electric-common-tutorials ros-electric-robot-model-tutorials libnetcdf6 libjs-sphinxdoc python-pyparsing libaudiofile0 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libcv-dev The following NEW packages will be installed libcv-dev 0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded. 2 not fully installed or removed. Need to get 0 B/3,114 kB of archives. After this operation, 11.1 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 261801 files and directories currently installed.) Unpacking libcv-dev (from .../libcv-dev_2.1.0-7build1_amd64.deb) ... 
dpkg: error processing /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb (-- unpack): trying to overwrite '/usr/bin/opencv_haartraining', which is also in package libopencv2.3-bin 2.3.1+svn6514+branch23-12~oneiric dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I've tried everything people recommend on internet like: sudo apt-get clean sudo apt-get autoremove sudo apt-get update sudo apt-get upgrade sudo apt-get -f install Also I've tried to install the synaptic manager but it doesn't let me install anything.. As you can see nothing works so I'm desperate! I'm using ubuntu 11.10, 64 bits Thanks!!

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around. Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE. Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team. I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.

    Read the article

  • Cannot install Acrobat Reader on Ubuntu 12.04.1

    - by user91137
    I have been trying to install Acrobat Reader on my Ubuntu 12.04.1. In the beginning, I tried to install it from the software-center, but it crashes with the report: (Reading database ... 189311 files and directories currently installed.) Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack): trying to overwrite '/usr/bin/acroread', which is also in package adobereader-ptb 8.1.7-2 No apport report written because MaxReports is reached already Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for man-db ... Errors were encountered while processing: /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb As a solution, I tried to install it via the terminal with sudo apt-get install acroread and received the following: arcanjo@arcanjo:~$ sudo apt-get install acroread Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: libldap2 libgnome-speech7 The following NEW packages will be installed: acroread 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 60.1 MB of archives. After this operation, 142 MB of additional disk space will be used. Get:1 http://archive.canonical.com/ubuntu/ precise/partner acroread i386 9.5.1-1precise1 [60.1 MB] Fetched 60.1 MB in 4min 17s (234 kB/s) (Reading database ... 189311 files and directories currently installed.) Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack): trying to overwrite '/usr/bin/acroread', which is also in package adobereader-ptb 8.1.7-2 No apport report written because MaxReports is reached already Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for man-db ... Errors were encountered while processing: /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) arcanjo@arcanjo:~$ I've already tried to upgrade and update apt-get, also tried to remove and re-install the software-center, tried deleting the "problematic" files and re-updating apt-get... Nothing seems to work... Any solutions?

    Read the article

  • Having fun with Reflection

    - by Nick Harrison
    I was once asked in a technical interview what I could tell them about Reflection. My response, while a little tongue in cheek, was that "I can tell you it is one of my favorite topics to talk about." I did get a laugh out of that and it was a great ice breaker. Reflection may not be the answer for everything, but it often can be, or maybe even should be. I have posted in the past about my favorite CopyTo method. It can come in several forms and is often very useful. I explain it further and expand on the basic idea here. The basic idea is to allow reflection to loop through the properties of two objects and synchronize the ones that are in common. I love this approach for data binding and passing data across the layers in an application. Recently I have been working on a project leveraging Data Transfer Objects to pass data through WCF calls. We won't go into how the architecture got this way, but in essence there is a partial duplicate inheritance hierarchy where there is a related Domain Object for each Data Transfer Object. The matching objects do not share a common ancestor or common interface but they will have the same properties in common. Bypassing the problems with this approach, let's talk about how Reflection and our friendly CopyTo could make the most of this bad situation without having to change too much. One of the problems is keeping the two sets of objects in synch. For this particular project, the DO has all of the functionality and the DTO is used to simply transfer data back and forth. Both sets of objects have parallel hierarchies with the same properties being defined at the corresponding levels. So we end up with BaseDO, BaseDTO, GenericDO, GenericDTO, ProcessAreaDO, ProcessAreaDTO, SpecializedProcessAreaDO, SpecializedProcessAreaDTO, TableDo, TableDto, and so on. Without using Reflection and a CopyTo function, tremendous care and effort must be taken to keep the corresponding objects in synch. New properties can be added at any level in the inheritance and must be kept in synch at all subsequent layers. For this project we have come up with a clever approach of calling a base GetDto or UpdateDto, making sure that the same method at each level of inheritance is called. Each level is responsible for updating the properties at that level. This is a lot of work, and not keeping it in synch can create all manner of problems, some of which are very difficult to track down. The other problem is the type of code that these methods tend to wind up with. You end up with code like this:

        Transferable dto = new Transferable();
        base.GetDto(dto);
        dto.OfficeCode = GetDtoNullSafe(officeCode);
        dto.AccessIndicator = GetDtoNullSafe(accessIndicator);
        dto.CaseStatus = GetDtoNullSafe(caseStatus);
        dto.CaseStatusReason = GetDtoNullSafe(caseStatusReason);
        dto.LevelOfService = GetDtoNullSafe(levelOfService);
        dto.ReferralComments = referralComments;
        dto.Designation = GetDtoNullSafe(designation);
        dto.IsGoodCauseClaimed = GetDtoNullSafe(isGoodCauseClaimed);
        dto.GoodCauseClaimDate = goodCauseClaimDate;

    One obvious problem is that this is tedious code. It is error-prone code. Adding helper functions like GetDtoNullSafe helps out immensely, but there is still an easier way. We can bypass the tedious code, bypass the complex inheritance tricks, and reduce all of this to a single method in the base class.
        TransferObject dto = new TransferObject();
        CopyTo(this, dto);
        return dto;

    In the case of this one project, such a change eliminated the need for 20% of the total code base and a whole class of unit test cases that made sure that all of the properties were in synch. The impact of such a change also needs to include the ongoing time savings and the improvements in quality that can arise from them. Developers who are not worried about keeping the properties in synch across mirrored object hierarchies are freed to worry about more important things like implementing business requirements.
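    For readers who have not seen such a helper before, here is a minimal sketch of what a reflection-based CopyTo might look like (an illustrative assumption on my part, not the author's actual implementation):

        using System;
        using System.Reflection;

        public static class ObjectCopier
        {
            // Copies every readable source property to a writable target property
            // that has the same name and a compatible type; everything else is skipped.
            public static void CopyTo(object source, object target)
            {
                foreach (PropertyInfo sourceProperty in source.GetType().GetProperties())
                {
                    PropertyInfo targetProperty = target.GetType().GetProperty(sourceProperty.Name);
                    if (targetProperty == null || !sourceProperty.CanRead || !targetProperty.CanWrite)
                        continue; // no matching writable property on the target
                    if (!targetProperty.PropertyType.IsAssignableFrom(sourceProperty.PropertyType))
                        continue; // incompatible types, skip rather than throw
                    targetProperty.SetValue(target, sourceProperty.GetValue(source, null), null);
                }
            }
        }

    With something along these lines in the base class, GetDto collapses to the three lines shown above, and a property added to both hierarchies is picked up automatically without touching any of the per-level methods.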

    Read the article

  • nfs-kernel-server installation : file does not exist

    - by Stuti Rastogi
    I am extremely new to Ubuntu and need to work on EdX platform. I need to install the NFS Client on Ubuntu 12.04 for the same. I used the following stuti@stuti:/$ sudo apt-get install nfs-kernel-server However this gives me an error as follows: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: nfs-common The following NEW packages will be installed: nfs-common nfs-kernel-server 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B/355 kB of archives. After this operation, 1,222 kB of additional disk space will be used. Do you want to continue [Y/n]? y Selecting previously unselected package nfs-common. (Reading database ... 200367 files and directories currently installed.) Unpacking nfs-common (from .../nfs-common_1%3a1.2.5-3ubuntu3.1_i386.deb) ... Selecting previously unselected package nfs-kernel-server. Unpacking nfs-kernel-server (from .../nfs-kernel-server_1%3a1.2.5-3ubuntu3.1_i386.deb) ... Processing triggers for ureadahead ... Processing triggers for man-db ... Setting up nfs-common (1:1.2.5-3ubuntu3.1) ... statd start/running, process 4574 gssd stop/pre-start, process 4603 idmapd start/running, process 4643 Setting up nfs-kernel-server (1:1.2.5-3ubuntu3.1) ... update-rc.d: /etc/init.d/nfs-kernel-server: file does not exist dpkg: error processing nfs-kernel-server (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: nfs-kernel-server E: Sub-process /usr/bin/dpkg returned an error code (1) I have tried sudo apt-get autoremove nfs-kernel-server sudo apt-get autoremove nfs-common After these, I tried to install but I keep getting the same error. apt-get update or upgrade also do not help and give the same error. I am clueless as to where can I find this missing file as stated in the output. I tried to google about this problem but none of the solutions I came across have helped or I have not been able to understand some of them. Any help would really be appreciated. Thanks in advance for your time and attention.

    Read the article

  • I am receiving a message saying I have duplicate sources but I can't seem to find a duplicate of the line described, any ideas?

    - by David Griffiths
    I receive this meassage when I run sudo apt-get update in the terminal:- Duplicate sources.list entry http://archive.canonical.com/ubuntu/ precise/partner i386 Packages (/var/lib/apt/lists/archive.canonical.com_ubuntu_dists_precise_partner_binary-i386_Packages) So i ran the command gksu gedit /etc/apt/sources.list and checked the source to find there was no duplicate, not that I can see anyway. Here is the source:- # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release i386 (20120423)]/ precise main restricted deb-src http://archive.ubuntu.com/ubuntu precise main restricted #Added by software-properties # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://gb.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://gb.archive.ubuntu.com/ubuntu/ precise restricted main multiverse universe #Added by software-properties ## Major bug fix updates produced after the final release of the ## distribution. deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates restricted main multiverse universe #Added by software-properties ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://gb.archive.ubuntu.com/ubuntu/ precise universe deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://gb.archive.ubuntu.com/ubuntu/ precise multiverse deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://gb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse #Added by software-properties deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security restricted main multiverse universe #Added by software-properties deb http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu precise partner ## Uncomment the following two lines to add software from Ubuntu's ## 'extras' repository. ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. 
    # deb http://extras.ubuntu.com/ubuntu precise main # deb-src http://extras.ubuntu.com/ubuntu precise main deb http://repository.spotify.com stable non-free I can see there are two lines of deb http://archive.canonical.com/ubuntu precise partner but one has #deb-src at the beginning of it. Hashed out, no? I'm quite new to the Linux OS and have little to no source-editing skills, so any help would be most appreciated. Thank you :)

    Read the article

  • How to Add Control Panel to “My Computer” in Windows 7 or Vista

    - by The Geek
    Back in the Windows XP days, you could easily add Control Panel to My Computer with a simple checkbox in the folder view settings. Windows 7 and Vista don’t make this quite as easy, but there’s still a way to get it back. To make this tweak, we’ll be doing a quick registry hack, but there’s a downloadable version provided as well. Manual Registry Tweak to Add Control Panel Open up regedit.exe through the start menu search or run box, and then browse down to the following key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace Now that you’re there, you’ll need to right-click and create a new key… If you want to add the regular Control Panel view, with the categories, you’ll need to use one GUID as the name of the key. If you want the icon view instead, you can use the other key. Here they are: Category View:  {26EE0668-A00A-44D7-9371-BEB064C98683} Icon View: {21EC2020-3AEA-1069-A2DD-08002B30309D} Once you’re done, it should look like this: Now over in the Computer view, just hit the F5 key to refresh the panel, and you should see the new icon pop up in the list: Now when you click on the icon you’ll be taken to Control Panel. If you didn’t know how to change the view before, you can use the drop-down box on the right-hand side to switch between Category and icon view. Downloadable Registry Hack Rather than deal with manual registry editing, you can simply download the file, extract it, and then either double-click on the AddCategoryControlPanel.reg to add the Category view icon, or AddIconControlPanel.reg to add the other icon. There’s an uninstall script provided for each. Download ControlPanelMyComputer Registry Hack from howtogeek.com
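    For reference, a .reg file that performs the Category view tweak should look something like the following (this is a reconstruction from the key path and GUID given above, assumed to be equivalent to the downloadable hack rather than its exact contents; swap in the Icon view GUID for the other variant):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace\{26EE0668-A00A-44D7-9371-BEB064C98683}]

    Save it with a .reg extension and double-click it (you will need administrator rights, since the key lives under HKEY_LOCAL_MACHINE); deleting the key undoes the tweak.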

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound to an ASP.NET web site map file? Ever wanted to have cool CSS design touches implemented on your ASP.NET data bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound with ASP.NET data controls? If your answers to the above questions are ‘yes’, then you will probably be happy to try out the latest release (v5.0) of Employee Info Starter Kit, which is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the concept ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production. This project template is titled “Employee Info Starter Kit”; it was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since. The latest version of this starter kit is hosted on Codeplex.

    Release Highlights

    User End Functional Specification
    The user end functionalities of this starter kit are pretty simple and straightforward, focused on performing CRUD operations on employee records as described below.
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records

    Architectural Overview
    - Simple 3 layer architecture (presentation, business logic and data access layer)
    - ASP.NET web form based user interface
    - Built-in code generators for logical layers, implemented in Visual Studio default template engine (T4)
    - Built-in Entity Framework entities as business entities (aka: data containers)
    - Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework
    - Domain Model design pattern based Business Logic Layer, implemented in C#
    - Object Model for Cross Cutting Concerns (such as validation, logging, exception management)

    Minimum System Requirements
    - Visual Studio 2010 (Web Developer Express Edition) or higher
    - Sql Server 2005 (Express Edition) or higher

    Technology Utilized
    Programming Languages/Scripts: browser side: JavaScript; web server side: C#; code generation template: T-4 Template
    Frameworks: .NET Framework 4.0; JavaScript Framework: jQuery 1.5.1; CSS Framework: 960 grid system
    .NET Framework Components: .NET Entity Framework; .NET Optional/Named Parameters (new in .net 4.0); .NET Tuple (new in .net 4.0); .NET Extension Method; .NET Lambda Expressions; .NET Anonymous Type; .NET Query Expressions; .NET Automatically Implemented Properties; .NET LINQ; .NET Partial Classes and Methods; .NET Generic Type; .NET Nullable Type; ASP.NET Meta Description and Keyword Support (new in .net 4.0); ASP.NET Routing (new in .net 4.0); ASP.NET Grid View (CSS support for sorting - new in .net 4.0); ASP.NET Repeater; ASP.NET Form View; ASP.NET Login View; ASP.NET Site Map Path; ASP.NET Skin; ASP.NET Theme; ASP.NET Master Page; ASP.NET Object Data Source; ASP.NET Role Based Security

    Getting Started Guide
    To see Employee Info Starter Kit in action is pretty easy! Download the latest version, extract the file, and from the extracted folder click the C# project file (Eisk.Web.csproj) to open it in Visual Studio 2010. Hit Ctrl+F5!
    The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links: Release Home Page, Installation Walkthrough, Hands-on Coding Walkthrough, Technical Reference. Enjoy!
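    Since the starter kit leans on T4-generated layers and the ‘.NET Partial Classes and Methods’ feature listed above, the extensibility idea can be illustrated with a minimal sketch (the class and method names below are invented for illustration, not the starter kit's actual generated code):

        // EmployeeRepository.generated.cs -- regenerated by a T4 template at any time
        public partial class EmployeeRepository
        {
            // Declared but not implemented here; if no other part implements it,
            // the compiler removes the declaration and every call to it.
            partial void OnEmployeeLoaded(Employee employee);

            public Employee GetById(int id)
            {
                Employee employee = LoadFromDatabase(id);
                OnEmployeeLoaded(employee); // hook for hand-written code
                return employee;
            }

            private Employee LoadFromDatabase(int id)
            {
                // The real generated code would run an Entity Framework query here.
                return new Employee { Id = id };
            }
        }

        // EmployeeRepository.cs -- hand-written, survives regeneration of the file above
        public partial class EmployeeRepository
        {
            partial void OnEmployeeLoaded(Employee employee)
            {
                // Custom validation, logging or test instrumentation goes here
                // without ever editing the generated file.
            }
        }

        public class Employee
        {
            public int Id { get; set; }
        }

    This split is what lets generated data access and business logic code be customized and unit tested without being overwritten the next time the templates run.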

    Read the article

  • Employee Info Starter Kit: Project Mission

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an open source ASP.NET project template that is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, it illustrates how to utilize Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is highly influenced by the concept ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production.

    User Stories
    The user end functionalities of this starter kit are pretty simple and straightforward, focused on performing CRUD operations on employee records as described below.
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records

    Key Technology Areas
    - ASP.NET 4.0
    - Entity Framework 4.0
    - T-4 Template
    - Visual Studio 2010

    Architectural Objective
    There is no universal architecture which can be considered the best for all sorts of applications around the world. Based on requirements, constraints and environment, application architecture can differ from one project to another. Trade-off factors are one of the important considerations while deciding on a particular architectural solution. Employee Info Starter Kit is highly influenced by the concept ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of effort with respect to learning curve and production. “Productivity” as the architectural objective typically also includes other trade-off factors, such as testability, flexibility and performance. Fortunately, Microsoft .NET Framework 4.0 and Visual Studio 2010 include lots of great features that have been implemented cleverly in this project to keep these trade-offs to a minimum.

    Why Employee Info Starter Kit is Not a Framework?
    Application frameworks are really great for productivity, and some of them are unavoidable in this modern age. However, relying on too many frameworks may be overkill for a project, as frameworks are typically designed to serve a wide range of usages and are less customizable or editable. On the other hand, having implementation patterns can be useful for developers, as it enables them to adjust the application on demand. Employee Info Starter Kit provides hundreds of “connected” snippets and implementation patterns to demonstrate problem solutions in an actual production environment. It also includes Visual Studio T-4 templates that generate thousands of lines of repetitive data access and business logic layer code in literally a few seconds on the fly, which are fully mock testable due to language support for partial methods and the latest support for mock testing in Entity Framework.

    Why Employee Info Starter Kit is Different than Other Open-source Web Applications?
    Software development is one of the most rapidly growing industries around the globe, where the technology is being updated very frequently to adapt to greater challenges over time. There are literally thousands of community web sites, blogs and forums that are dedicated to providing support for adopting new technologies.
    While some are really great for learning new technologies quickly, in most cases they are either too “simple and brief” to be used in real world scenarios, or too “complex and detailed”, typically focused on achieving a product goal (such as a CMS, e-Commerce, etc.) from an "end user" perspective and carrying a long learning curve with respect to the corresponding technology. Employee Info Starter Kit, as a web project, is basically "developer" oriented and takes a hybrid, “simple and detailed” approach: a simple domain has been chosen intentionally to illustrate most of the architectural and implementation challenges faced by web application developers, so that anyone can dive deep into the corresponding new technology or concept quickly.

    Roadmap
    Since its first release in 2008 on MSDN Code Gallery, Employee Info Starter Kit has gained huge popularity in the ASP.NET community and has had 150,000+ downloads since then. Encouraged by this great response, we have a strong commitment to the community to keep supporting it as the underlying technologies evolve. Currently hosted on Codeplex, this community driven project is planned to have a wide range of individual editions, each of which will be focused on a selected application architecture, framework or platform, such as ASP.NET Webform, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Service Platform (Cloud), Visual Studio Automated Test etc. See here for a full list of current and future editions.

    Read the article

  • Speed Up the Help Dialog in Windows and Office

    - by Matthew Guay
    When you click help, you don’t want to wait for your computer to bring it to you.  Here’s how you can speed up the help dialog in Windows and Office. If you have a slow internet connection, chances are you’ve been frustrated by the Help dialog in Windows and Office trying to download fresh content every time you open them. This can be great if the updated help files contain better content, but sometimes you just want to find what you were looking for without waiting.  Here’s how you can turn off the automatic online help. Use Local Help in Windows Windows 7 and Vista’s help dialog usually tries to load the latest content from the net, but this can take a long time on slow connections. If you’re seeing the above screen a lot, you may want to switch to offline help.  Click the “Online Help” button at the bottom, and select “Get offline Help”. Now your computer will just load the pre-installed help files.  And don’t worry; if there’s a major update to your help files, Windows will download and install it through Windows Update.   Stupid Geek Tip: An easy way to open Windows Help is to click on your desktop or Start Menu and press F1 on your keyboard. Use Local Help in Office This same trick works in Office 2007 and 2010.  We’ve actually had more problems with Office’s help being tardy. Solve this the same way as with Windows help.  Click on the “Connected to Office.com” or “Connected to Office Online” button, depending on your version of Office, and select “Show content only from this computer”. This will automatically change the settings for Help in all of your Office applications. While this may not be a major trick, it can be helpful especially if you have a slow internet connection and want to get things done quickly.

    Read the article

  • How to show images saved in the documents directory as thumbnails in a cocos2d class

    - by Anil gupta
    I have just implemented multiple photo selection from iphone photo library and i am saving all selected photo in document directory every time as a array, now i want to show all saved images in my class from document directory as a thumbnail, i have tried some logic but my game getting crashing, My code is below. Any help will be appreciate. Thanks in advance. -(id) init { // always call "super" init // Apple recommends to re-assign "self" with the "super" return value if( (self=[super init])) { CCSprite *photoalbumbg = [CCSprite spriteWithFile:@"photoalbumbg.png"]; photoalbumbg.anchorPoint = ccp(0,0); [self addChild:photoalbumbg z:0]; //Background Sound // [[SimpleAudioEngine sharedEngine]playBackgroundMusic:@"Background Music.wav" loop:YES]; CCSprite *photoalbumframe = [CCSprite spriteWithFile:@"photoalbumframe.png"]; photoalbumframe.position = ccp(160,240); [self addChild:photoalbumframe z:2]; CCSprite *frame = [CCSprite spriteWithFile:@"Photo-Frames.png"]; frame.position = ccp(160,270); [self addChild:frame z:1]; /*_____________________________________________________________________________________*/ CCMenuItemImage * upgradebtn = [CCMenuItemImage itemFromNormalImage:@"AlbumUpgrade.png" selectedImage:@"AlbumUpgrade.png" target:self selector:@selector(Upgrade:)]; CCMenu * fMenu = [CCMenu menuWithItems:upgradebtn,nil]; fMenu.position = ccp(200,110); [self addChild:fMenu z:3]; NSError *error; NSFileManager *fM = [NSFileManager defaultManager]; NSString *documentsDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"]; NSLog(@"Documents directory: %@", [fM contentsOfDirectoryAtPath:documentsDirectory error:&error]); NSArray *allfiles = [fM contentsOfDirectoryAtPath :documentsDirectory error:&error]; directoryList = [[NSMutableArray alloc] init]; for(NSString *file in allfiles) { NSString *path = [documentsDirectory stringByAppendingPathComponent:file]; [directoryList addObject:file]; } NSLog(@"array file name value ==== %@", directoryList); CCSprite *temp = [CCSprite spriteWithFile:[directoryList objectAtIndex:0]]; [temp setTextureRect:CGRectMake(160.0f, 240.0f, 50,50)]; // temp.anchorPoint = ccp(0,0); [self addChild:temp z:10]; for(UIImage *file in directoryList) { // NSData *pngData = [NSData dataWithContentsOfFile:file]; // image = [UIImage imageWithData:pngData]; NSLog(@"uiimage = %@",image); // UIImage *image = [UIImage imageWithContentsOfFile:file]; for (int i=1; i<=3; i++) { for (int j=1;j<=3; j++) { CCTexture2D *tex = [[[CCTexture2D alloc] initWithImage:file] autorelease]; CCSprite *selectedimage = [CCSprite spriteWithTexture:tex rect:CGRectMake(0, 0, 67, 66)]; selectedimage.position = ccp(100*i,350*j); [self addChild:selectedimage]; } } } } return self; }

    Read the article

  • IDC Analyst Report Touts Oracle–Accenture Strategic Initiative

    - by kristin.jellison
    Hi there, partners! Oracle Engineered Systems have been getting some love lately, and we want to share it with you! The market intelligence and advisory firm IDC recently released a report lauding Oracle and Accenture’s strategic initiative to bring the performance and flexibility of Oracle Engineered Systems to clients. The report, “Oracle and Accenture Strategic Alliance Places Big Bet on Engineered Systems,” by Steve White, reflects a largely positive analysis of the relationship. White notes that the alliance is “one of the largest in the industry.” Under the relationship, Accenture has incorporated Oracle Engineered Systems—including Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SuperCluster, and Oracle Exalytics In-Memory Machine—into its leading datacenter transformation consulting services. Together, the two companies have also created bespoke platforms, such as the Accenture Foundation Platform for Oracle, which helps clients accelerate deployments on Oracle Fusion Middleware, running Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine. Oracle Engineered Systems deliver a single, engineered platform—spanning server, storage, and networking. This makes it easier and cheaper for Accenture clients around the world to prepare their datacenters for managing, processing and analyzing the massive amounts of data they (rightly) anticipate seeing in the next decade. The new solutions can help reduce the effort and cost to migrate any vendor database to an Oracle Engineered Systems platform, which can lower the cost of ownership by up to 50 percent. For its part, Accenture has built a team of 300 consultants to implement and increase the flexibility and stability of client datacenters. This move further expands one of the fastest-growing full-service Oracle Enterprise solutions. Over 52,000 Accenture consultants are qualified to implement, upgrade and outsource the Oracle product suite. Accenture is a Diamond-level member of Oracle PartnerNetwork (OPN). For Oracle Partners, this update should give you at least two things to walk away with. First, this initiative is showing signs of success. As Marty Cole, group chief executive for Accenture’s Technology growth platform, put it, “We are seeing an increasing number of clients recognizing the value of consolidating their databases and taking advantage of the cost and performance benefits delivered by these solutions.” The pipeline is there—and not just for Accenture. Use this example to show your clients that investments in Oracle Engineered Systems are on the rise. Second, recognize that Oracle Engineered Systems represent one of the biggest platforms for growth that Oracle has to offer partners. As part of the agreement, Accenture is able to provide: Platform Readiness Assessments, Platform Implementation, App Rationalization, Database Rationalization, and Managed Services. These are all enablement opportunities you can offer customers under Oracle’s partner programs—to continue building the value of their investments, and the value of your relationship with Oracle. Take a read through the IDC report. To learn more about the partnership, see this press release. Happy selling! The OPN Communications Team

    Read the article

  • The best Bar on the globe is ... in Seoul/Korea

    - by Mike Dietrich
    As you already know, sometimes I write about things that don't have anything to do with a database upgrade. So if you are looking for tips, tricks, and articles about that topic, please stop reading now. Actually, I'm not a let's-go-to-a-bar person. I enjoy good food and a fine dessert wine afterwards. But last week in Seoul/Korea, Ryan, our local host, asked us after a wonderful dinner at a Korean barbecue place if we'd like to visit a bar. I was really tired, as I had flown into Seoul overnight from Sunday to Monday, arriving early Monday morning, grabbing a shower and breakfast - and then a full day of very good and productive customer meetings. But one thing Ryan mentioned caught my immediate attention: the owner of the bar collects records and has a huge tube amp stereo system - and you can ask him to play your favorite songs. The bar is called "Peter, Paul and Mary" - honestly not my favorite style of music. And I couldn't even find a webpage or an address - only that little piece of information on Facebook. But after stepping down the stairs to the cellar, my eyes almost popped out of my head. This is the audio system: enormous corner horn loudspeakers from Western Electric. Pretty old, I'd suppose, but delivering incredibly present dynamics into the room. And plenty of tube equipment from Jadis, NSA Labs, and Shindo Laboratories' Western Electric 300B Limited amps from Tokyo. And the owner (I was so amazed I simply forgot to ask for his name) has been collecting records for 40 years. And we had many wishes that night. Actually, when we entered Peter, Paul and Mary, he was playing an old Helloween song. That must have been destiny: a German entering a bar in Korea, and the owner is playing an old song by one of Germany's best heavy metal bands ever. And it went on with the Doors, Rainbow's Stargazer, Scorpions, later Deep Purple's Perfect Strangers, a bit of Santana, Carly Simon, Jimi Hendrix, David Bowie ... Ronnie James Dio's Holy Diver, Gary Moore, Peter Gabriel's San Jacinto ... and many, many more great songs ... Of course we were the last guests, leaving the place at 2 am - and I've never had a better night in a bar before ... I could have stayed for days listening to so many records ... Thanks Ryan, that was a fantastic night! -Mike

    Read the article

  • Bash arrays and case statements - review my script

    - by Felipe Alvarez
    #!/bin/bash # Change the environment in which you are currently working. # Actually, it calls the relevant 'lettus.sh' script if [ "${BASH_SOURCE[0]}" == "$0" ]; then echo "Try running this as \". chenv $1\"" exit 0 fi usage(){ echo "Usage: . ${PROG} -- Shows a list of user-selectable environments." echo " . ${PROG} [env] -- Select environment." echo " . ${PROG} -h -- Shows this usage screen." return } showEnv(){ # check if index0 exists, assume we have at least the first (zeroth) element #if [ -z "${envList}" ]; then if [ -z "${envList[0]}" ]; then echo "array \$envList is empty! " >&2 return 1 fi # Show all elements in array (0 -> n-1) for i in $(seq 0 $((${#envList[@]} - 1))); do echo ${envList[$i]} done return } setEnv(){ if [ -z "$1" ]; then usage; return fi case $1 in cold) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_cold.sh;; coles) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_coles.sh;; fc) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fc.sh;; fcrm) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fcrm.sh;; stable) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_stable.sh;; tip) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_tip.sh;; uat) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_uat.sh;; wellmdc) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_wellmdc.sh;; *) usage; return;; esac if $IS_SOURCED; then echo "Environment \"$1\" selected." echo "Now sourcing file \"$FILE_TO_SOURCE\"..." . ${FILE_TO_SOURCE} return else return 1 fi } main(){ if [ -z "$1" ]; then showEnv; return fi case $1 in -h) usage;; *) setEnv $1;; esac return } PROG="chenv" # create array of user-selectable environments envList=( cold coles fc fcrm stable tip uat wellmdc ) main "$@" return If I could, I'd like to get some feedback on a better way to accomplish any of the following: run through the case statement; make the script trivially simple to maintain/upgrade/update.

    Read the article

  • Documentation in Oracle Retail Merchandising System (RMS) and Oracle Retail Fiscal Management System (ORFM), Release 13.2.4

    - by Oracle Retail Documentation Team
    The Patch Release 13.2.4 of the Oracle Retail Merchandising System (RMS) and its module, Oracle Retail Fiscal Management (ORFM)  is now available from My Oracle Support. End User Documentation Enhancements The following summarize the highlights of changes made to the documentation in conjunction with the new Brazil-related functionality: Foundation chapter in the Oracle Retail Merchandising System (RMS)/Sales Audit (ReSA) Brazil Localization User GuideThis chapter was updated with a non-base Localization Flexible Attribution Solution (LFAS) section that addresses the addition of several new custom attributes to Items and Suppliers through non-base LFAS for Brazil; it also addresses the extension of the Retail Tax Integration Layer (RTIL) through the Oracle Retail Merchandising System (RMS), and Oracle Retail Fiscal Management System (ORFM).  ORFM User GuideThe Purchase Order chapter was updated to include schedule related updates for a Nota Fiscal. The Fiscal Documents chapter was updated to include information on creating a new NF and searching for details using Vendor Product Number. Oracle Retail Fiscal Management/RMS Brazil Localization Implementation GuideThe Implementation Checklist chapter was updated with a note on multi-currency functionality. The Batch Processes chapter was updated with information on the NF EDI batch. The following summarize the highlights of changes made to the documentation in conjunction with the new technical certifications (see the RMS 13.2.4 Release Notes for more information): Installation Guides for RMS and for ORFM/RMS BrazilThese installation guides were updated extensively to account for the multiple technical certification enhancements in 13.2.4. White Paper: How to Upgrade from WebLogic11g 10.3.3 to WebLogic11g 10.3.4  (Doc ID: 1432575.1)See the previous blog entry regarding this new White Paper. New Documents on My Oracle Support for Brazil Localization Overview and Interfaces Tax Vendor Integration (Doc ID: 1424048.1)Oracle chooses to integrate with a third party tax expert to delivery the Brazilian solution. Oracle has built the Retail Tax Integration layer (RTIL) as the key integration component to support the integration of Oracle suite of products with external tax vendors. This paper addresses the RTIL integration interfaces with TaxWeb, providing guidance on the typical integration interfaces and operations that must be supported by other tax solutions in the Brazilian market. Oracle Retail Fiscal Management/RMS Brazil Localization: Localization Flexible Attribute Solution (LFAS) (Doc ID: 1418509.1)The white paper covers the definition of custom attributes in Localization Flexible Attribute Solution (LFAS) and enables retailers to perform data conversion changes. Retailers can add several new custom attributes to Items and Suppliers through non-base LFAS for Brazil and extend Retail Tax Integration Layer (RTIL) through the Oracle Retail Merchandising System (RMS), and Oracle Retail Fiscal Management System (RFM). 
Documents Published in RMS and ORFM Release 13.2.4 Oracle Retail Merchandising System Release Notes Oracle Retail Merchandising System Installation Guide Oracle Retail Merchandising System User Guide and Online Help Oracle Retail Sales Audit (ReSA) User Guide and Online Help Oracle Retail Merchandising System Operations Guide Oracle Retail Merchandising System Data Model Oracle Retail Merchandising Batch Schedule Oracle Retail Merchandising Implementation Guide Oracle Retail POS Suite 13.4.1 / Merchandising Operations Management13.2.4 Implementation Guide Oracle Retail Fiscal Management Data Model Oracle Retail Fiscal Management/RMS Brazil Localization Installation Guide Oracle Retail Fiscal Management/RMS Brazil Localization Implementation Guide Oracle Retail Fiscal Management User Guide and Online Help

    Read the article

  • Excel Solver vs Solver Foundation

    - by JoshReuben
    I recently read a book http://www.amazon.com/Scientific-Engineering-Cookbook-Cookbooks-OReilly/dp/0596008791/ref=sr_1_1?ie=UTF8&s=books&qid=1296593374&sr=8-1 - the Excel Scientific and Engineering Cookbook. The two main tools that this book leveraged were the Data Analysis Pack and Excel Solver. I had previously been acquainted with Microsoft Solver Foundation - this is a full-fledged API for solving optimization problems, and it goes beyond being a mere Excel plugin - it exposes a C# programmatic interface for in-process integration and a web service interface for out-of-process integration. Were they the same? Apparently not! Two different solver frameworks for Excel: http://www.solver.com/index.html http://www.solverfoundation.com/ I contacted both vendors to get their perspectives. Here's what the Excel Solver guys had to say: "The Solver Foundation requires you to learn and use a very specific modeling language (OML). The Excel Solver allows you to formulate your optimization problems without learning any new language, simply by entering the formulas into cells on the Excel spreadsheet, something that nearly everyone is already familiar with doing. The Excel Solver also allows you to seamlessly upgrade to products that combine Monte Carlo Simulation capabilities (our Risk Solver Premium and Risk Solver Platform products), which allow you to include uncertainty in your models when appropriate. Our advanced Excel Solver products also have a number of built-in reporting tools for advanced analysis of your model and its results." And here's what the Microsoft Solver Foundation guys had to say: "With the release of Solver Foundation 3.0, Solver Foundation has the same kinds of solvers (plus a few more) as what is found in Excel Solver. I think there are two main differences: 1. Problems are described differently. In Excel Solver the goals and constraints are specified inside the spreadsheet, in formulas. In Solver Foundation they are described either in .NET code that uses the Solver Foundation Services API, or using the OML modeling language in Excel. 2. Solver Foundation's primary strength is in solving large linear, mixed integer, and constraint models. That is, models that contain arbitrary nonlinear functions (such as trig functions, IF(), powers, etc.) are handled a bit better by the Excel Solver at this point."
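    To make the contrast concrete, here is a minimal sketch of what a small optimization model looks like when expressed through the Solver Foundation Services API instead of spreadsheet cells. The model data (products, hour budgets, profit figures) is invented for illustration, and the type and method names reflect the Solver Foundation 3.0 object model as I remember it (SolverContext, Model, Decision, GoalKind), so verify them against the SDK documentation before relying on them.

```csharp
using System;
using Microsoft.SolverFoundation.Services;

class ProductionPlanExample
{
    static void Main()
    {
        // Get the solver context and create an empty model.
        SolverContext context = SolverContext.GetContext();
        Model model = context.CreateModel();

        // Two decision variables: how many units of each product to build.
        Decision unitsOfA = new Decision(Domain.RealNonnegative, "unitsOfA");
        Decision unitsOfB = new Decision(Domain.RealNonnegative, "unitsOfB");
        model.AddDecisions(unitsOfA, unitsOfB);

        // Capacity constraints (the hour budgets are made-up numbers).
        model.AddConstraint("machineHours", 2 * unitsOfA + 1 * unitsOfB <= 100);
        model.AddConstraint("laborHours", 1 * unitsOfA + 3 * unitsOfB <= 90);

        // Objective: maximize profit at 40 per unit of A and 30 per unit of B.
        model.AddGoal("profit", GoalKind.Maximize, 40 * unitsOfA + 30 * unitsOfB);

        // Solve and print the optimal plan.
        Solution solution = context.Solve();
        Console.WriteLine("A: {0:F1}, B: {1:F1} ({2})",
            unitsOfA.ToDouble(), unitsOfB.ToDouble(), solution.Quality);
    }
}
```

    The equivalent Excel Solver model is the same three relationships typed into worksheet cells, with the decision cells, constraint cells, and objective cell selected in the Solver dialog - which is exactly the trade-off both vendors describe above.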

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst-case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (it can alternatively be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test; if issues are identified at this stage, the testers simply won't 'waste their time' touching the build at all. Such a ‘broken build’ will trigger an alert, and the development team’s number one priority should be to resolve the issue. How application code is compiled and tested as part of continuous integration is well understood. However, this isn’t so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following: 1) Your application code is being compiled and tested, so you need a database to be maintained at the corresponding version. 2) Just as a valid application should compile, so should the database; it should therefore be possible to build a new database from scratch. 3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version. I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.
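    As a concrete illustration of the second point, a continuous integration job can rebuild a database from scratch simply by replaying the schema scripts held in source control against an empty database, and failing the build if any script errors. The sketch below shows that idea only in outline; the folder layout, connection string, and naive batch-splitting on GO are assumptions made for the example rather than a description of any particular CI tool.

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

class BuildDatabaseFromScratch
{
    static int Main()
    {
        // Assumed layout: numbered .sql files checked into source control, applied in order.
        var scripts = Directory.GetFiles(@"C:\src\MyApp\Database\Scripts", "*.sql")
                               .OrderBy(Path.GetFileName);

        // Assumed connection string; a CI job would point this at a disposable database.
        using (var connection = new SqlConnection(
            @"Server=(local);Database=MyApp_CI;Integrated Security=true"))
        {
            connection.Open();
            foreach (var script in scripts)
            {
                // Naive batch splitting on GO; dedicated deployment tools do this more robustly.
                var batches = File.ReadAllText(script)
                    .Split(new[] { "\r\nGO\r\n", "\nGO\n" }, StringSplitOptions.RemoveEmptyEntries);
                foreach (var batch in batches)
                {
                    using (var command = new SqlCommand(batch, connection))
                    {
                        command.ExecuteNonQuery(); // any failure throws and breaks the build
                    }
                }
                Console.WriteLine("Applied {0}", Path.GetFileName(script));
            }
        }
        return 0;
    }
}
```

    Generating the upgrade script of the third point is usually delegated to a schema-comparison tool rather than hand-rolled, but the build-from-scratch step above is often the first stage a team automates.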

    Read the article

  • Silverlight Cream for April 22, 2010 -- #844

    - by Dave Campbell
    In this Issue: Miroslav Miroslavov, David Anson, Mike Snow, Jason Alderman, Denis Gladkikh, John Papa, Adam Kinney, and CrocusGirl. Shoutout: Mike Snow is moving his blog to Silverlight Tips of The Day... his first is a repeat of number 110 of the last list, but you'll want to bookmark the page. Falling in the 'too cool not to mention' category... Pete Brown posted another MIX10 interview: New Channel 9 Video: Josh Blake on NaturalShow Multi-touch in WPF Adam Kinney announced that the Upgrade to Expression Studio v4 for free – now in writing! From SilverlightCream.com: Visuals staring at the mouse cursor Miroslav Miroslavov at SilverlightShow has a first part post up on the design of the CompleteIT site... going through the 'follow the mouse' effect that is so cool on the main page... with source. Today's DataVisualizationDemos release includes new demos showing off stacked series behavior Falling squarely into the category of "when does he sleep"... David Anson has another Visualization post up today... adding a stacked series and Text-to-Chat sample. Silverlight: Unable to start Debugging. The Silverlight managed debugging package isn’t installed. Mike Snow has a tip up about what to do if you get an "Unable to start debugging" box when you crank up VS ... not a great solution, but it's a solution. Introducing Pillbox for Windows Phone The folks at Veracity definitely have way too much fun with technology :) ... check out the WP7 app that Jason Alderman blogged about... he has a link out to another page with screenshots... oh, AND the code... Export data to Excel from Silverlight/WPF DataGrid Denis Gladkikh demonstrates using COM Interop to export to Excel from both WPF and Silverlight. He discusses issues he had and has all the source for both platforms available. Silverlight TV 21: Silverlight 4 - A Customer's Perspective John Papa has Silverlight TV number 21 up and he's chatting with Franck Jeannin of Ormetis, Ward Bell of IdeaBlade, and Dave Wolf of Cynergy Systems, all presenters in the Keynote at DevConnections. Avatar Mosaic -Experimenting with the Artefact Animator Adam Kinney spent enough time with Artefact Animator to put up a lengthy post about it including his project. Artefact Animator itself is available on CodePlex, and Adam has the link... this looks like good stuff. Windows Phone 7 Design Notes – Part2: Metro + Adrian Frutiger CrocusGirl continues her WP7 Metro discussion with a great long post on background she's researched and some of her own work with typography... a great read.

    Read the article

  • Best Design Pattern for Coupling User Interface Components and Data Structures

    - by szahn
    I have a windows desktop application with a tree view. Due to lack of a sound data-binding solution for a tree view, I've implemented my own layer of abstraction on it to bind nodes to my own data structure. The requirements are as follows: Populate a tree view with nodes that resemble fields in a data structure. When a node is clicked, display the appropriate control to modify the value of that property in the instance of the data structure. The tree view is populated with instances of custom TreeNode classes that inherit from TreeNode. The responsibility of each custom TreeNode class is to (1) format the node text to represent the name and value of the associated field in my data structure, (2) return the control used to modify the property value, (3) get the value of the field in the control (3) set the field's value from the control. My custom TreeNode implementation has a property called "Control" which retrieves the proper custom control in the form of the base control. The control instance is stored in the custom node and instantiated upon first retrieval. So each, custom node has an associated custom control which extends a base abstract control class. Example TreeNode implementation: //The Tree Node Base Class public abstract class TreeViewNodeBase : TreeNode { public abstract CustomControlBase Control { get; } public TreeViewNodeBase(ExtractionField field) { UpdateControl(field); } public virtual void UpdateControl(ExtractionField field) { Control.UpdateControl(field); UpdateCaption(FormatValueForCaption()); } public virtual void SaveChanges(ExtractionField field) { Control.SaveChanges(field); UpdateCaption(FormatValueForCaption()); } public virtual string FormatValueForCaption() { return Control.FormatValueForCaption(); } public virtual void UpdateCaption(string newValue) { this.Text = Caption; this.LongText = newValue; } } //The tree node implementation class public class ExtractionTypeNode : TreeViewNodeBase { private CustomDropDownControl control; public override CustomControlBase Control { get { if (control == null) { control = new CustomDropDownControl(); control.label1.Text = Caption; control.comboBox1.Items.Clear(); control.comboBox1.Items.AddRange( Enum.GetNames( typeof(ExtractionField.ExtractionType))); } return control; } } public ExtractionTypeNode(ExtractionField field) : base(field) { } } //The custom control base class public abstract class CustomControlBase : UserControl { public abstract void UpdateControl(ExtractionField field); public abstract void SaveChanges(ExtractionField field); public abstract string FormatValueForCaption(); } //The custom control generic implementation (view) public partial class CustomDropDownControl : CustomControlBase { public CustomDropDownControl() { InitializeComponent(); } public override void UpdateControl(ExtractionField field) { //Nothing to do here } public override void SaveChanges(ExtractionField field) { //Nothing to do here } public override string FormatValueForCaption() { //Nothing to do here return string.Empty; } } //The custom control specific implementation public class FieldExtractionTypeControl : CustomDropDownControl { public override void UpdateControl(ExtractionField field) { comboBox1.SelectedIndex = comboBox1.FindStringExact(field.Extraction.ToString()); } public override void SaveChanges(ExtractionField field) { field.Extraction = (ExtractionField.ExtractionType) Enum.Parse(typeof(ExtractionField.ExtractionType), comboBox1.SelectedItem.ToString()); } public override string FormatValueForCaption() { return 
string.Empty; } The problem is that I have "generic" controls which inherit from CustomControlBase. These are just "views" with no logic. Then I have specific controls that inherit from the generic controls. I don't have any functions or business logic in the generic controls because the specific controls should govern how data is associated with the data structure. What is the best design pattern for this?
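    One way to avoid growing a parallel TreeNode subclass for every control subclass is a mediator-style "field binder" object: the tree node stays a plain view item, and one small binder per field knows how to build its editor control, load the value into it, save it back, and format the caption. The sketch below is only an illustration of that direction, not the questioner's code; the generic TModel parameter would be instantiated with something like ExtractionField, and the member names are assumptions.

```csharp
using System;
using System.Windows.Forms;

// One binder per editable field: it builds the editor control, moves data
// between the model and the control, and formats the tree node caption.
public interface IFieldBinder<TModel>
{
    string Caption { get; }
    Control CreateControl();                  // build the editor control once
    void Load(TModel model, Control editor);  // model -> control
    void Save(TModel model, Control editor);  // control -> model
    string FormatValue(TModel model);         // text shown next to the node caption
}

// Reusable binder for any enum-typed property, expressed with getter/setter
// delegates instead of a TreeNode subclass plus a UserControl subclass.
public class EnumBinder<TModel, TEnum> : IFieldBinder<TModel> where TEnum : struct
{
    private readonly Func<TModel, TEnum> getter;
    private readonly Action<TModel, TEnum> setter;

    public EnumBinder(string caption, Func<TModel, TEnum> getter, Action<TModel, TEnum> setter)
    {
        Caption = caption;
        this.getter = getter;
        this.setter = setter;
    }

    public string Caption { get; private set; }

    public Control CreateControl()
    {
        var combo = new ComboBox { DropDownStyle = ComboBoxStyle.DropDownList };
        combo.Items.AddRange(Enum.GetNames(typeof(TEnum)));
        return combo;
    }

    public void Load(TModel model, Control editor)
    {
        ((ComboBox)editor).SelectedItem = getter(model).ToString();
    }

    public void Save(TModel model, Control editor)
    {
        var selected = (string)((ComboBox)editor).SelectedItem;
        setter(model, (TEnum)Enum.Parse(typeof(TEnum), selected));
    }

    public string FormatValue(TModel model)
    {
        return getter(model).ToString();
    }
}
```

    With something like this in place, an ordinary TreeNode only needs to carry its binder (its Tag property is enough), the form asks the binder for the editor control and the caption text when a node is clicked, and adding a new field means registering one binder instead of writing a node subclass and a control subclass.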

    Read the article

  • Combine the Address & Search Bars in Firefox

    - by Asian Angel
    The Search Bar in Firefox is very useful for finding additional information or images while browsing but the UI space it takes up can be frustrating at times. Now you can reclaim that UI space and still have access to all that searching goodness with the Foobar extension. Note: This is about the Foobar Firefox extension and not to be confused with Foobar2000 the open source music player. Before If you have the “Search Bar” displayed there is no doubt that it is taking up valuable space in your browser’s UI. What you need is the ability to reclaim that UI space and still have the same access to your search capability as before…no more sacrificing one for a gain with the other. After As soon as you have installed the extension you can see that the top part of your browser will look much sleeker without the “Search Bar” to clutter it up. The “Search Engine Icon” will now be visible inside of your “Address Bar” as seen here. You will be able to access the same “Search Engine Menu” as before by clicking on the “Search Engine Icon”. There are two display modes for search results (setting available in the “Options”). The first one shown here is “Simple Mode” where all results are in a condensed format. Notice that not only are there search suggestions but also “Bookmarks & History” listings as well. You can literally get the best of both when conducting a search. Note: The number of entries for search suggestions and bookmark/history listings can be adjusted higher or lower in the “Options”. The second one is “Rich Mode” where the results are shown with more details. Choose the “mode” that best suits your personal style. For our first example you can see the results when we conducted a quick search on “Windows 7” (using the first of the three offerings shown from Bing). Our second example was a search for “Flowers” using our Photobucket search engine. Once again nice results opened in a new tab for us. Options The options are easy to go through. It is really nice to be able to choose the number of results that you want displayed and the format that you want them shown in. Note: Changing the “Suggestion popup style” will require a browser restart to take effect. Conclusion If you love using the “Search Bar” in Firefox but want to reclaim the UI space then you will definitely want to add this extension to your browser. The ability to customize the number of results and choose the formatting make this extension even better. Links Download the Foobar extension (Mozilla Add-ons)

    Read the article

  • apt-get broken + dependencies issue + many software uninstalled

    - by vnc786
    OS = Ubuntu 12.04 64-bit, kernel 3.2.0-29-generic. What did I do? I ran apt-get purge libre*, which I thought would remove LibreOffice 3.5, but that was a big mistake. After a couple of moments I realised it was removing all sorts of other software, so I hit Ctrl-C, which stopped the apt process, and restarted my system. After that I found that the following software was missing: apt-get, evince (PDF viewer), cheese, etc. Here is the full list: http://pastebin.com/CWHrw10y. I managed to install the apt .deb package through dpkg, but now I am not able to do any installation, so I removed /var/lib/dpkg/status and created a new one, but that didn't help. apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4 libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g Suggested packages: debconf-doc debconf-utils whiptail dialog gnome-utils libterm-readline-gnu-perl libgtk2-perl libnet-ldap-perl libqtgui4-perl libqtcore4-perl apt bzip2 ncompress xz-lzma The following NEW packages will be installed: apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4 libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g 0 upgraded, 24 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. Need to get 0 B/9,246 kB of archives. After this operation, 29.9 MB of additional disk space will be used. Do you want to continue [Y/n]? y E: Cannot get debconf version. Is debconf installed? debconf: apt-extracttemplates failed: No such file or directory dpkg: regarding .../libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb containing libgcc1, pre-dependency problem: libgcc1 pre-depends on multiarch-support multiarch-support is unpacked, but has never been configured. dpkg: error processing /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb (--unpack): pre-dependency problem - not installing libgcc1 No apport report written because MaxReports is reached already Errors were encountered while processing: /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb E: Internal Error, No file name for libc6 W: Could not perform immediate configuration on 'multiarch-support:amd64'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2) E: Sub-process /usr/bin/dpkg returned an error code (1) # apt-get -u dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libc6 : Depends: libgcc1 but it is not installed Depends: tzdata but it is not installed E: Unmet dependencies. Try using -f. I have tried the links below ("Unable to install due to debconf problem" and "How do I resolve unmet dependencies?") but with no luck. Thanks.

    Read the article

  • Find a Faster DNS Server with Namebench

    - by Mysticgeek
    One way to speed up your Internet browsing experience is using a faster DNS server. Today we take a look at Namebench, which will compare your current DNS server against others out there, and help you find a faster one. Namebench Download the file and run the executable (link below). Namebench starts up and will include the current DNS server you have configured on your system. In this example we’re behind a router and using the DNS server from the ISP. Include the global DNS providers and the best available regional DNS server, then start the Benchmark. The test starts to run and you’ll see the queries it’s running through. The benchmark takes about 5-10 minutes to complete. After it’s complete you’ll get a report of the results. Based on its findings, it will show you what DNS server is fastest for your system. It also displays different types of graphs so you can get a better feel for the different results. You can export the results to a .csv file as well so you can present the results in Excel. Conclusion This is a free project that is in continuing development, so results might not be perfect, and there may be more features added in the future. If you’re looking for a method to help find a faster DNS server for your system, Namebench is a cool free utility to help you out. If you’re looking for a public DNS server that is customizable and includes filters, you might want to check out our article on helping to protect your kids from questionable content using OpenDNS. You can also check out how to speed up your web browsing with Google Public DNS. Links Download NameBench for Windows, Mac, and Linux from Google Code Learn More About the Project on the Namebench Wiki Page

    Read the article

  • Coming to a City Near You: Oracle Business Analytics Summits

    - by Rob Reynolds
    More and more organizations use analytics to identify new business opportunities, reduce costs, and optimize business processes. How? By making business information available throughout the enterprise—and making sure that it is relevant, actionable, and easy to access. Oracle invites you to join us for an information-packed event where you’ll learn about the latest trends, best practices, and innovations in business intelligence, analytic applications, and data warehousing. If you are an IT professional involved in BI strategy, program management, systems management, architecture, or deployment, this event is for you. You’ll find out about: new ways of deploying and delivering business intelligence on premise, in the cloud, and on mobile devices to a diverse base of business users; new approaches for integrating, storing, managing, securing, and accessing your ever-growing volumes of structured and unstructured data; and the latest strategies for dramatically increasing the ROI of your ERP and CRM deployments. Click here to view the presentation abstracts.
    Agenda (breakout sessions run in two parallel tracks: Technology and Architecture Strategy Track / Business Insight and Analytic Delivery Track):
    9:00 a.m. Registration
    10:00 a.m. Keynote: Business Analytics—Be the First to Know
    11:00 a.m. Break
    11:15 a.m. Emerging Trends in Enterprise BI Platforms / Mobile BI—More than Dashboards on a Tablet
    12:00 noon Networking Lunch (both tracks)
    1:00 p.m. Is Your Business Intelligence Data at Risk? / Geospatial Intelligence—Location, Location, Location!
    1:45 p.m. What Extreme Performance Means for Your Business / The Role of BI in Your ERP and Performance Management Initiatives
    2:30 p.m. Become a BI Architect / BI Applications: Step 1 in Your ERP Upgrade or Expansion
    3:00 p.m. Partner Spotlight
    Registration links for each city are below: New York, NY - July 26; Miami, FL - July 27; Reston, VA - July 27; Atlanta, GA - July 28; Boston, MA - July 28; Rochester, NY - Aug 2 (event link coming soon!); Menlo Park, CA - August 2; Charlotte, NC - August 3; Newport Beach, CA - August 3. Register online at the links above or call 1.800.820.5592 ext. 9218 to reserve your place.

    Read the article

< Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >