Search Results

Search found 20904 results on 837 pages for 'disk performance'.


  • System in low graphics, deleted Linux, grub rescue, can't access Windows

    - by First timer
    So I'm pretty new to Ubuntu, but I managed to install it with no big problems on both my desktop and netbook. When I installed it on my brother's netbook, everything went horribly wrong, and now I fear the system is close to beyond repair. The first problem was that it said it did not have any space left (which seemed ridiculous, since it had plenty). Then Ubuntu began booting into a "System is running in low graphics mode" error, which I tried to fix using all the tips I could find on here, but nothing helped. I think the graphics error and the lack of space might have been related, but I can't be sure.

    Finally I gave up repairing Ubuntu and went for a reinstall. I shouldn't have done that! I read that I should simply boot Ubuntu from a live USB and use GParted to delete the Linux partitions, so I did, and rebooted accordingly. Next I was going to install Ubuntu, but now I am only given the option to wipe the whole disk for Ubuntu, not to install alongside Windows 7. If I open GParted I can still see the NTFS partitions that hold Windows 7 (there are two: one labeled RECOVERY and another labeled OS and boot), so why can't I access them? By the way, the OS and boot partition has a little red mark with a warning that one cluster is referenced multiple times; I don't know what that means.

    If I boot without the live USB I am sent directly into grub rescue, the black screen where the computer will follow no orders. Please, I know that the easiest fix might be to simply wipe the whole thing clean, but there are important files and programs on Windows 7. Is there a way to just access Windows? It is a Dell Inspiron 1018 Mini netbook, so I have no CD drive and no Windows 7 installation CD.

    Read the article

  • Error while removing the new kernel 2.6.37

    - by Tarek
    Hi! I tried to install the new kernel, but something went wrong and I'm trying to remove it now. The error message is:

        mhd@Tarek-Laptop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          linux-image-2.6.37-020637-generic
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 111MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 188780 files and directories currently installed.)
        Removing linux-image-2.6.37-020637-generic ...
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.37-020637-generic /boot/vmlinuz-2.6.37-020637-generic
        /etc/default/grub: 33: Syntax error: EOF in backquote substitution
        run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2
        Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.37-020637-generic.postrm line 328.
        dpkg: error processing linux-image-2.6.37-020637-generic (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         linux-image-2.6.37-020637-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The previous unsolved error is reported in this bug.

    Read the article

  • Software Center - "Items cannot be installed or removed until package catalog is repaired"

    - by Stephanie
    I tried to install Back In Time and now I keep getting the message "Items cannot be installed or removed until the package catalog is repaired". I have tried sudo apt-get install -f, then get:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          backintime-gnome
        The following packages will be upgraded:
          backintime-gnome
        1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/39.4 kB of archives.
        After this operation, 24.6 kB of additional disk space will be used.
        Do you want to continue [Y/n]?

    When I answer Y, I get the following message:

        dpkg: dependency problems prevent configuration of backintime-gnome:
         backintime-gnome depends on backintime-common (= 1.0.7); however:
          Version of backintime-common on system is 1.0.8-1.
        dpkg: error processing backintime-gnome (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         backintime-gnome
        E: Sub-process /usr/bin/dpkg returned an error code (1)

        stephanie@stephanie-ThinkPad-T61:~$ sudo dpkg --configure -a
        dpkg: dependency problems prevent configuration of backintime-gnome:
         backintime-gnome depends on backintime-common (= 1.0.7); however:
          Version of backintime-common on system is 1.0.8-1.
        dpkg: error processing backintime-gnome (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         backintime-gnome

    Read the article

  • Self-serv advertising service

    - by Mystere Man
    I am seeking a self-serve advertising service for my websites, but I have a few restrictions that seem to make what I'm looking for hard to find. Specifically, I want to place "advertise here" links on my pages and allow end-users to purchase advertising on that site, page, and location. These ads will not be part of a national network. Requirements:

    - Supports multi-tenancy. That is, I have a number of domains using the same "web application" but with customized content per domain. When a customer wants to advertise on a given domain, the ads will only appear on that domain and on that page of the domain (even though the page name may be the same across multiple domains).
    - Supports fixed ad prices, not just CPC. I need monthly and quarterly pricing regardless of performance.
    - Integrates with OpenX and other ad networks, so that if there is no self-serve ad in a given zone, it falls back to national or direct advertising.

    Shiny Ads has much of this, but I'm looking for alternatives, as their prices are a bit crazy (20%) and they can only do PayPal.

    Read the article

  • Android - big game universe

    - by user1641923
    I am new to Android development, though I have a lot of experience with Java, C++, and PHP programming, and a bit of experience with vector graphics too (basic 3ds Max, Flash, etc.). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engine or third-party libs.

    I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of single asteroids/comets which the player can interact with, and a non-interactive background. I am looking for the least complicated approach to creating such a universe. My current ideas are:

    - Simply create bitmaps with space scenery backgrounds that can be seamlessly tiled, construct my 2D universe out of these tiles, then place interactive objects (planets, other spaceships) on top; see the sketch after this list.
    - Use vector graphics. I would have a solid color background with some random background objects and gradients here and there. My problem here is lack of knowledge of how well vector graphics is integrated into Android.

    Performance? Memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory for the whole game? I am interested in technical details regarding each of the ideas, and a suggestion as to which I should go with.
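
    For the first idea, a minimal sketch (assuming an Android Canvas and a pre-made seamless tile bitmap; the class name and TILE_SIZE are illustrative, not from the post) of how wrap-around tiling could fake an unbounded universe:

        // Hypothetical sketch: seamless tiled background with wrap-around
        // world coordinates, giving a pseudo-infinite 2D universe.
        import android.graphics.Bitmap;
        import android.graphics.Canvas;

        public class TiledBackground {
            private static final int TILE_SIZE = 512;   // px, one seamless tile
            private final Bitmap tile;                  // pre-loaded tile bitmap

            public TiledBackground(Bitmap tile) {
                this.tile = tile;
            }

            // cameraX/cameraY are the world coordinates of the view's top-left.
            public void draw(Canvas canvas, int cameraX, int cameraY) {
                // Positive modulo, so the tiling wraps cleanly when the camera
                // crosses zero (the "2D projection of a sphere" effect).
                int offX = ((cameraX % TILE_SIZE) + TILE_SIZE) % TILE_SIZE;
                int offY = ((cameraY % TILE_SIZE) + TILE_SIZE) % TILE_SIZE;
                for (int x = -offX; x < canvas.getWidth(); x += TILE_SIZE) {
                    for (int y = -offY; y < canvas.getHeight(); y += TILE_SIZE) {
                        canvas.drawBitmap(tile, x, y, null);
                    }
                }
            }
        }

    Only one tile bitmap ever lives in memory this way; interactive objects would be drawn on top at (world position - camera position).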

    Read the article

  • polipo E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by ICXC
    Here is the full session:

        @me:/home$ sudo apt-get install polipo
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          polipo
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 198 kB of archives.
        After this operation, 799 kB of additional disk space will be used.
        Get:1 http://sy.archive.ubuntu.com/ubuntu/ precise/universe polipo amd64 1.0.4.1-1.1 [198 kB]
        Fetched 198 kB in 2s (97.5 kB/s)
        Selecting previously unselected package polipo.
        (Reading database ... 169595 files and directories currently installed.)
        Unpacking polipo (from .../polipo_1.0.4.1-1.1_amd64.deb) ...
        Processing triggers for doc-base ...
        Processing 1 added doc-base file...
        Processing triggers for man-db ...
        Processing triggers for install-info ...
        Processing triggers for ureadahead ...
        Setting up polipo (1.0.4.1-1.1) ...
        Starting polipo: Couldn't open config file /etc/polipo/config: 2.
        invoke-rc.d: initscript polipo, action "start" failed.
        dpkg: error processing polipo (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         polipo
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Fusion CRM Release 7 RCDs and TOIs Now Available!

    - by Richard Lefebvre
    Fusion CRM Release 7 Release Content Documents (RCDs) and Transfer of Information (TOI) presentations are now available. In addition, you can find 245 new or changed product features for Release 7 on Oracle Product Features. All the new RCDs and TOIs can be found on the Fusion Learning Center:

    - Customer Relationship Management: TOIs - Customer Center, Define Segmentation Strategy, Enterprise Contracts, Oracle Social Network, Sales, and Territory Management; Business Process Model (BPM) RCDs - Customer Service, Marketing, Order Fulfillment, and Sales
    - Financials: BPM RCDs - Asset Lifecycle Management, Cash and Treasury Management, and Financial Control and Reporting
    - Human Capital Management: TOIs - Workforce Development, Compensation, Benefits, Worker Performance, Workforce Profiles, Enterprise Structures, Talent Review, Manage Transaction and Batch Processing, Delete HCM Storage Data, and Load Batch Data; BPM RCDs - Compensation Management, Enterprise Information Management, Workforce Deployment, and Workforce Development
    - Procurement: TOI - Requisitions; BPM RCD - Procurement
    - Project Portfolio Management: TOIs - Project Resources, Evaluate and Assign Resources, Maintain Resource Assignments, Manage Resource Demand, Manage Resource Supply, Manage Resource Utilization and Analytics, Project Management, Set Up Project Management; BPM RCD - Project Management
    - Supply Chain Management: TOIs - Manage New Product Definition and Approval, Manage Product Change Orders, Product Hub, Define Item Class; BPM RCDs - Materials Management and Logistics, Product Management, and Supply Chain Planning

    Partners and customers can access the content from the following locations:

    - Partner access: BPM RCDs and TOIs - Oracle Partner Network Fusion Learning Center; new feature RCDs - Oracle Product Features
    - Customer access: TOIs - My Oracle Support (Note 1528594.1); BPM RCDs - My Oracle Support (Note 1559828.1); new feature RCDs - Oracle Product Features

    Read the article

  • What is a better way to do data retrieval in an application that handles a limited amount of data?

    - by Milanix
    Just moved this question from Stack Overflow, since adding my code snippets would make it really long. I am interested in better ways to retrieve data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically/manually on a daily basis based on user preference, an actual version update happens only once every few months or so, since the schedule is maintained by some other authority which doesn't provide an API but rather announces changes publicly.

    The actual XML contains (n groups) x (days in a week) x (n schedules). The number of groups is usually 6 and the number of schedules is usually 2, so basically there would usually be only around 100 strings. I have used SQLite so far, but I want to know how to handle the update. Should I show a progress dialog saying the application is updating, and exit the app when it's done? Since my updates are infrequent, I don't think this would really harm the user experience, but is there a better way to do it? I don't want the update to run while the user is searching (which uses the database); that causes a "database already open" exception - at least I have faced that problem before.

    Is it better to parse the XML every time the user wants to view something, or to use SQLite? And since I make heavy use of adapters in my app to create lists, will that degrade performance?
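
    One hedged way to structure the infrequent refresh (a sketch assuming Android's SQLiteDatabase; the table names, meta/version scheme, and ScheduleUpdater class are illustrative, not from the post) is to do the whole update in a single transaction, so a search never observes a half-written catalog:

        // Hypothetical sketch: apply the parsed XML only when its version is
        // newer, inside one transaction so readers never see partial data.
        import android.database.Cursor;
        import android.database.sqlite.SQLiteDatabase;
        import java.util.List;

        public class ScheduleUpdater {
            public void applyIfNewer(SQLiteDatabase db, int remoteVersion,
                                     List<String> entries) {
                if (remoteVersion <= readLocalVersion(db)) return; // up to date

                db.beginTransaction();
                try {
                    db.delete("schedule", null, null);  // ~100 rows: replace wholesale
                    for (String entry : entries) {
                        db.execSQL("INSERT INTO schedule(entry) VALUES (?)",
                                   new Object[] { entry });
                    }
                    db.execSQL("UPDATE meta SET version = ?",
                               new Object[] { remoteVersion });
                    db.setTransactionSuccessful();
                } finally {
                    db.endTransaction();
                }
            }

            private int readLocalVersion(SQLiteDatabase db) {
                Cursor c = db.rawQuery("SELECT version FROM meta", null);
                try {
                    return c.moveToFirst() ? c.getInt(0) : 0;
                } finally {
                    c.close();
                }
            }
        }

    If reads and this transactional write also share a single database handle (e.g. one SQLiteOpenHelper instance), the "database already open" problem tends to disappear as well.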

    Read the article


  • Caching by in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on subsequent requests (we run singleton services). Any update goes to the DB, then updates the in-memory dictionary. It all seems simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data in the cache until then. There are cases where we store millions of rows in memory because a table has many relations to other tables and we need to show aggregate data quickly.

    All this cache handling is a big part of our codebase, and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and makes it hard to understand the actual business logic. However, I don't think we can serve data in a reasonable amount of time if we have to hit the database every time.

    I am unhappy about the current situation, but I don't have a better alternative. My only idea would be to use the NHibernate second-level cache, but I have nearly no experience with it. I know many companies use Redis or memcached heavily to gain performance, but I have no idea how I would integrate them into our system, and I also don't know if they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into?
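
    A minimal sketch of one alternative shape (in Java for illustration; the post's stack is .NET, so this shows the pattern, not a drop-in): a read-through cache where the database stays the source of truth and a write simply invalidates the entry instead of hand-maintaining the dictionary:

        // Hypothetical sketch: read-through cache. Writes hit the DB first,
        // then invalidate the key; the next read repopulates from the DB.
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Function;

        public class ReadThroughCache<K, V> {
            private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
            private final Function<K, V> loader;   // e.g. a DB query per key

            public ReadThroughCache(Function<K, V> loader) {
                this.loader = loader;
            }

            public V get(K key) {
                // Loads from the DB only on a miss.
                return map.computeIfAbsent(key, loader);
            }

            public void invalidate(K key) {
                map.remove(key);   // call after every successful DB write
            }
        }

    The point of the shape is that every update path becomes two steps (write DB, invalidate key) instead of the scattered "update DB and also patch the dictionary in place" bookkeeping described above.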

    Read the article

  • Acer Aspire 5542G overheating with ubuntu/kubuntu 12.04

    - by james
    I have an Acer Aspire 5542G laptop purchased a couple of years ago. All this time I used Windows 7 on it. Then I tried Ubuntu 12.04. Everything was fine except an overheating issue. I updated Ubuntu with all security fixes and available updates, but nothing solved the problem. With idle use like internet browsing, the CPU fan speeds up a lot and I can feel very hot air coming from the vent (comparable to playing a serious 3D game on Windows). It never gets to the point of freezing or shutting down, but as long as I'm using it, with no intensive tasks at all, the laptop stays too hot.

    This wasn't the case with Windows 7; there the fan would not spin up at all under normal use. I heard there was a manufacturing defect with some Acer laptops, but I don't think that's the case with my laptop, since Windows 7 runs perfectly. I updated the BIOS to the latest version. I cleaned the dust from the vents. I tried Kubuntu 12.04, fully up to date. Nothing solved the issue.

    My laptop specs: CPU: AMD Turion II X2 M500 @ 2.2 GHz; GPU: AMD Mobility Radeon HD 4570; 3 GB RAM and a 320 GB hard disk.

    Read the article

  • Does F# kill C++?

    - by MarkPearl
    Okay, so the title may be a little misleading… but I am currently travelling and so have had very little time and access to resources to do much fsharping - this has meant that I am right now missing my favourite new language. I was interested to see this post on Stack Overflow this evening concerning the performance of the F# language. The person posing the question asked 8 key points about the F# language, namely:

    - How well does it do floating-point?
    - Does it allow vector instructions?
    - How friendly is it towards optimizing compilers?
    - How big a memory footprint does it have?
    - Does it allow fine-grained control over memory locality?
    - Does it have capacity for distributed-memory processors, for example Cray?
    - What features does it have that may be of interest to computational science, where heavy number processing is involved?
    - Are there actual scientific computing implementations that use it?

    Now, I don't have much time to look into a decent response, and to be honest I don't know half of the answers to what he is asking, but it was interesting to see what was put up as an answer so far, and it would be interesting to get other people's feedback on these questions if they know of anything other than what has been covered in the answer section already.

    Read the article

  • A generic Re-usable C# Property Parser utility [on hold]

    - by Shyam K Pananghat
    This is about a utility I happened to write which can parse through the properties of data contracts at runtime using reflection. The input required is an XPath-like string. Since this uses reflection, you don't have to add a reference to any of your data contracts, making it purely generic and re-usable. You can read about it and get the full C# source code here: Property-Parser-A-C-utility-to-retrieve-values-from-any-Net-Data-contracts-at-runtime

    Now, about my doubts regarding this utility. I use it heavily in many places of my code.

    - I am using Regex repeatedly inside a recursive method. Does this affect memory usage or GC collection badly? Do I have to dispose of anything manually? If yes, how?
    - Statements like obj.GetType().GetProperty() and obj.GetType().GetField() return a .NET "object", which makes it difficult or impossible to introduce generics here. Does this cause overheads, like boxing?

    Overall, please suggest how to make this utility performance-efficient and more lightweight on memory.
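
    A rough analogue of the idea (sketched in Java for illustration; the original utility is C#, and the names here are hypothetical) that walks a dotted property path via reflection. It also shows the performance point in question: the reflective get returns Object, so primitive values are boxed on every access:

        // Hypothetical sketch: resolve "a.b.c" against an object graph by
        // reflection. The path is split once up front, keeping pattern work
        // out of the per-segment loop.
        import java.lang.reflect.Field;

        public final class PropertyPath {
            public static Object resolve(Object root, String path)
                    throws ReflectiveOperationException {
                Object current = root;
                for (String segment : path.split("\\.")) {
                    if (current == null) return null;
                    // For brevity this looks only at fields declared on the
                    // runtime class, not inherited ones.
                    Field f = current.getClass().getDeclaredField(segment);
                    f.setAccessible(true);
                    current = f.get(current);   // primitives are boxed here
                }
                return current;
            }
        }

    In the C# original, the analogous advice would be to build the Regex once (e.g. a static readonly instance) rather than inside the recursion; .NET Regex does not implement IDisposable, so there is nothing to dispose manually, and instances are collected like any other object.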

    Read the article

  • Is sending data to a server via a script tag an outdated paradigm?

    - by KingOfHypocrites
    I inherited some old JavaScript code for a website tracker that submits data to the server using a script URL:

        var src = "http://domain.zzz/log/method?value1=x&value2=x";
        var e = document.createElement('script');
        e.src = src;

    I guess the idea was that cross-domain requests didn't have to be enabled. Also, it was written back in 2005, and I'm not sure how well XMLHttpRequest was supported at the time. Anyone could stick this on their website and send data to our server for logging, and it would ideally work in almost any browser with JavaScript. The main limitation is that all the server can do is send back JavaScript code, and each request has to wait for a response from the server (in the form of a generic acknowledgement JavaScript method call) to know it was received before sending the next one.

    I can't find anyone doing this online, or any metrics as to whether this is faster or more secure than XMLHttpRequest. I don't know if this is just an old way of doing things or if it's still the best way to send data to the server when you are mostly sending data one way and you need the best performance possible. So, in summary: is sending data via a script tag an outdated paradigm? Should I abandon it in favor of XMLHttpRequest?

    Read the article

  • Challenging Job after Graduate Studies

    - by sriram
    I worked at an M.N.C. developing web applications in Java/J2EE-related technologies (including JSF, Struts, Hibernate, etc.), then quit my job to pursue graduate studies in the U.S.A. So I am a student in the middle of my graduate studies. I have had enough of developing mere CRUD applications in J2EE; now I want to work on something exciting. The problem is I can't say exactly what, but I can give you examples: say, developing new JDK libraries, or writing a kernel for some OS, or something like that. So I have five questions:

    1. Is it true that people in R & D often use C++ because of its higher performance? In that case, should I consider switching my platform to C/C++?
    2. How should I use the one year I have until graduation to prepare myself for job interviews for such positions (e.g., reading books on algorithms, etc.)?
    3. How do I find out about these jobs, and how do I apply for them?
    4. Is it the right time for me to think about jobs?
    5. Am I over-ambitious, given that I am not in an Ivy League school, just a normal one? (My GPA is not so high, unfortunately.)

    Read the article

  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port). It's faster than USB 1.0 speeds (1 MB/s) but much slower than USB 2.0 speeds (12 MB/s). Copying 1.8 GB takes me over 10 minutes (it should be < 3 min). I have two identical SanDisk Cruzer 8 GB sticks, and I have the same problem with both. I have a Super Talent 32 GB USB SSD in the neighboring port and it works at expected speeds.

    The problem I see in the GUI is that the progress bar goes to 90% almost instantly, completes to 100% a little slower, and then hangs there for 10 minutes. Interrupting the copy at this point seems to result in corruption at the tail end of the file. If I wait for it to complete, the copy is successful. Any ideas? dmesg output below:

        [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
        [64059.526419] scsi8 : usb-storage 2-1.2:1.0
        [64060.529071] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.14 PQ: 0 ANSI: 2
        [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
        [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
        [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
        [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
        [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.541290] sdd: sdd1
        [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk

    Read the article

  • dvcs - is "clone to branch" a common workflow?

    - by Tesserex
    I was recently discussing dvcs with a coworker, because our office is beginning to consider switching from TFS (we're a MS shop). In the process, I got very confused because he said that although he uses Mercurial, he hadn't heard of a "branch" or "checkout" command, and these terms were unfamiliar to him. After wondering how it was possible that he didn't know about them and explaining how dvcs branches work "in place" on your local files, he was quite confused.

    He explained that, similar to how TFS works, when he wants to create a "branch" he does it by cloning, so he has an entire copy of his repo. This seemed really strange to me, but the benefit, which I have to concede, is that you can look at or work on two branches simultaneously because the files are separate. In searching this site to see if this has been asked before, I saw a comment that many online resources promote this "clone to branch" methodology, to the poster's dismay.

    Is this actually common in the dvcs community? And what are some of the pros and cons of going this way? I would never do it, since I have no need to see multiple branches at once, switching is fast, and I don't need all the clones filling up my disk.

    Read the article

  • Implementing a dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue.

    Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source. Example queries:

    - Simple: for a given XYZ postcode, what is the average house price for a 3-bedroom house?
    - Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper attributes like building type, year built, features)?

    In addition to the average price, the application should support other property statistics (maximum or minimum price, etc.) and trends (a graph of a selected property attribute over a period of time). Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be:

    - What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days?
    - What kind of properties can we get for X price (irrespective of location or house type)?

    The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interfaces, DW, or something else) this problem (dynamic queries on historic data) belongs to, so that I can explore further. My findings so far (I could be wrong on the following, so please correct me if you think so):

    - I briefly read about BI/data analytics; I think it is a heavyweight solution for my problem and has scalability issues.
    - DB design: as I understand it, an RDBMS works well if you know the data model at design time. I expect the attributes of a property or other entities (users) to evolve quickly, so maintenance would be an issue; and with multiple users executing queries at the same time, performance could be a bottleneck.
    - Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using tools meant for generic purposes makes me feel like I'm doing assembly programming to solve my problem).
    - Big-data solutions are for analysing data from multiple unrelated domains.

    So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of a back-end for a property listing or similar portal.)
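
    As a sketch of the "no fixed search fields" requirement at the query layer (illustrative Java; the table and column names are hypothetical, and filter keys must come from a whitelist, never raw user input), an aggregate query can be assembled from an arbitrary filter map:

        // Hypothetical sketch: build a parameterized aggregate over an
        // arbitrary set of attribute filters, e.g. AVG(price) for
        // {postcode=XYZ, bedrooms=3}. New attributes need no change to
        // the query-building code.
        import java.util.List;
        import java.util.Map;
        import java.util.StringJoiner;

        public final class PriceQueryBuilder {
            public static String build(String metric,
                                       Map<String, Object> filters,
                                       List<Object> outParams) {
                StringJoiner where = new StringJoiner(" AND ");
                for (Map.Entry<String, Object> e : filters.entrySet()) {
                    where.add(e.getKey() + " = ?");  // key must be whitelisted
                    outParams.add(e.getValue());
                }
                String sql = "SELECT " + metric + " FROM sales";
                if (!filters.isEmpty()) sql += " WHERE " + where;
                return sql;
            }
        }

    For example, build("AVG(price)", filters, params) with postcode and bedrooms filters yields something like SELECT AVG(price) FROM sales WHERE postcode = ? AND bedrooms = ?; the evolving-attributes concern then moves to the storage side (e.g. an attribute table or a document store) rather than the query code.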

    Read the article

  • Is SSD TRIM support still automatic in 12.10?

    - by dam
    Folks I had automatic TRIM working on my laptop running Ubuntu Precise. As in the TRIM guide I added discard to mount options in /etc/fstab, and hdparm --read-sector read 0s immediately after a rm && sync. Using the very same hardware, laptop and SSD, TRIM seems no longer to be automatic after upgrading to Quantal. I recognise the test in the guide I mentioned above may not necessarily work. SSD erase blocks and all that. But Quantal is at least different. After deleting the file and syncing, its data are still on disk and unerased even after waiting several minutes. fstrim will then 0 the dead file's blocks. Once. Repeat the same test five minutes later, and fstrim does nothing. I figure this is probably really a kernel issue, but that box is too black for my spelunking torch. I'm prepared to believe that kernel 3.5 knows what I want better than I do, and all is well despite appearances, but it looks for all the world like TRIM isn't quite all there any more. Anybody have the scoop on TRIM in Quantal/kernel 3.5?

    Read the article

  • Algorithm for Virtual Machine (VM) consolidation in the cloud

    - by devansh dalal
    PROBLEM: We have N physical machines (PMs), each with RAM Ri and CPU Ci, and a set of currently scheduled VMs, each with RAM requirement ri and CPU requirement ci. Moving (migrating) any VM from one PM to another has an associated cost which depends on its RAM ri. A PM with no VMs is shut down to save power. Our target is to minimize the weighted sum of (N, migration cost) by migrating some VMs, i.e. to minimize the number of working PMs while not degrading the service level through excessive migrations.

    My approach: the brute-force approach is to choose the minimum-loaded PM and try to fit its VMs onto other PMs using the First Fit Decreasing algorithm; alternatively, we can select victim and target PMs based on their load levels and shut down victims where possible by moving their VMs to targets. I tried this greedy approach on data from Baadal (the IIT-D cloud), but it isn't giving promising results. I have also tried to study ant colony optimization for dynamic VM consolidation but was unable to understand very much. I used these links:

    http://dumas.ccsd.cnrs.fr/docs/00/72/52/15/PDF/Esnault.pdf
    http://hal.archives-ouvertes.fr/docs/00/72/38/56/PDF/RR-8032.pdf

    Would anyone please explain the solution or suggest a new approach for better performance? Thanks in advance.
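
    A minimal sketch of the First Fit Decreasing step named above (plain Java; the Pm/Vm classes and the RAM-first ordering are illustrative), trying to empty one victim PM into a list of targets:

        // Hypothetical sketch: First Fit Decreasing placement. VMs from the
        // least-loaded PM are sorted by descending RAM (the migration-cost
        // driver) and placed into the first target with room.
        import java.util.Comparator;
        import java.util.List;

        public final class Consolidator {
            static class Vm {
                final double ram, cpu;
                Vm(double ram, double cpu) { this.ram = ram; this.cpu = cpu; }
            }

            static class Pm {
                double freeRam, freeCpu;
                Pm(double r, double c) { freeRam = r; freeCpu = c; }
                boolean fits(Vm v)  { return v.ram <= freeRam && v.cpu <= freeCpu; }
                void place(Vm v)    { freeRam -= v.ram; freeCpu -= v.cpu; }
            }

            // Returns true only if every VM found a target, i.e. the victim
            // PM can actually be emptied and shut down.
            static boolean firstFitDecreasing(List<Vm> vms, List<Pm> targets) {
                vms.sort(Comparator.comparingDouble((Vm v) -> v.ram).reversed());
                for (Vm v : vms) {
                    Pm host = null;
                    for (Pm p : targets) {
                        if (p.fits(v)) { host = p; break; }  // first fit
                    }
                    if (host == null) return false;          // abort: keep PM on
                    host.place(v);
                }
                return true;
            }
        }

    One detail that often hurts greedy results: commit migrations only when the whole victim empties (as the boolean above enforces), since partial evacuations pay migration cost without letting any PM shut down.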

    Read the article

  • Does Google Analytics exclude Campaign traffic from Facebook in the Social reports?

    - by user1612223
    For a while we have used campaign tags when putting posts on Facebook, so that we can run campaign reports in Google Analytics on those links. However, it appears that traffic from those links is being excluded from Google's Social reports. For example, between 7/20 and 8/19 I'm seeing 123 visits where Facebook is the source in my Campaigns report, but only 29 visits where Facebook is the source in my Social Sources report. Main questions:

    1. Does Google exclude campaign traffic from its Social reports?
    2. If it does, is there any way to reconcile that so the traffic shows up in both reports?
    3. If it doesn't, what could be causing the vast discrepancy?

    One observer noted that we are setting the medium to "post" when passing the campaign parameters, and that Google may only allow "referral" traffic in its Social reports (just speculation). In that case we could potentially change the medium to "referral", but that would undermine some of our strategy of being able to set different mediums. I have also considered that maybe the campaign traffic came to the site several times and the Social report may count the same user as fewer visits; however, over 70% of the Facebook campaign traffic is new traffic, so at a minimum there would need to be over 85 visits on the social side for that argument to be valid.

    I've done several searches for information on this topic and haven't run across much of anything. I did post the same question on Google's Product Forum and have not gotten a response (the title of that question was 'Facebook Campaign Traffic Not Showing in Social Reports'). The inability to pass campaign data on Facebook posts would make evaluating the performance of those specific posts very difficult, so I'm hoping there is a solution.

    Read the article

  • File browsing on Nautilus extremely slow after upgrade to 12.04

    - by Tobelli
    One month ago I updated (no fresh install) to 12.04. Since then, Nautilus has become extremely slow. When I open a folder that contains many subfolders, I sometimes have to wait 4 seconds until everything is displayed. It has never been like that before; in previous versions I could always browse my files extremely fast. I looked in "Additional Drivers" and changed from the NVIDIA current-version updates to the recommended driver. This drastically improved the performance and speed of file browsing, unfortunately just for a couple of days; now I am stuck again with a very slow Nautilus. I also tried to install the latest NVIDIA driver as suggested here: http://www.techlw.com/2012/03/install-nvidia-drivers-on-ubuntu-1204.html - it did not work at all. Also, when I use the Dash to try to find files, it does not respond properly: it either doesn't find files or loads for ages until the file is displayed.

    I am working on an Acer notebook with an Intel® Core™ i5 CPU M 430 @ 2.27GHz × 4, 6 GB RAM, GeForce GT 320M/PCIe/SSE2, and 64-bit Ubuntu 12.04.

    Read the article

  • Join Us!! Live Webinar: Using UPK for Testing

    - by Di Seghposs
    Create Manual Test Scripts 50% Faster with Oracle User Productivity Kit
    Thursday, March 29, 2012, 11:00 am - 12:00 pm ET

    Click here to register now for this informative webinar.

    Oracle UPK enhances the testing phase of the implementation lifecycle by reducing test plan creation time, improving accuracy, and providing the foundation for reusable training documentation, application simulations, and end-user performance support - all critical assets to support an enterprise application implementation. With Oracle UPK:

    - Reduce manual test plan development time: accelerate the testing cycle by significantly reducing the time required to create the test plan.
    - Improve test plan accuracy: capture test steps automatically using Oracle UPK and import those steps directly into any of these testing suites, eliminating many of the errors that occur when writing manual tests.
    - Create the foundation for reusable assets: recorded simulations can be used in other lifecycle phases of the project, such as knowledge transfer for training and support.

    With its integration with Oracle Application Testing Suite, IBM Rational, and HP Quality Center, Oracle UPK allows you to deploy high-quality applications quickly and effectively by providing a consistent, repeatable process for gathering requirements, planning and scheduling tests, analyzing results, and managing issues. Join this live webinar and learn how to decrease your time to deployment and enhance your testing plans today!

    Read the article

  • CodePlex Daily Summary for Sunday, August 24, 2014

    Popular Releases:

    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.31.0: Fixed problem with menu item 'Plugins->CS-Script->Debug' invoking 'Run' instead of 'Debug'.
    - Media Companion: Media Companion MC3.599b: New: MC - remember the last monitor Media Companion ran on, and re-open there if available. MC - if Notepad++ is installed, use it for opening nfo XML files. Movie - fix: Fanart & Poster searching using the 'Google Search' button opened multiple browser tabs, one per search word. Movie - allow re-scrape with the XBMC TMDB scraper if an IMDB Id is present. TV - added option to save the season poster into the season folder as folder.jpg. Fixed: Movie - table view error if a row header was selected. Movie - Tab...
    - ASP.NET Identity 2.0 Azure Table Storage: Release 1.2.5.2: Optimizing the login and email index queries. Optimizing the IsInRoleAsync operation. 100% unit test pass and 100% code coverage. Full sample source available as a download or in the source branch /Releases/1.2.x.x/sample. Sample code doesn't require an Azure account but does require the Azure SDK with the Storage Emulator at a minimum for running locally. Full suite of unit tests against this assembly at 100% pass rate against the Azure Local Emulator and against a live Azure Storage acc...
    - BugNET Issue Tracker: BugNET 1.6.327: This release contains fixes and enhancements from the previous 1.6.315 release. Please read our release notes for BugNET 1.6.327: http://blog.bugnetproject.com/2014/08/23/bugnet-1-6-327-and-bugnet-pro-1-5-99-released/
    - DIII Save Editor: ROS Alpha 1.2.14.100: Initial ROS alpha release; please report all bugs.
    - SEToolbox: SEToolbox 01.044.014 Release 2: Fixed ship name not saving. Fixed broken cubes view bug. Fixed cast VRage.MyFixedPoint error when opening games with meteors. Added checkbox when importing a 3D model to Export Ship, to fill it as solid.
    - CS-Script Source: Release v3.8.5: Fixed problem with the warnings getting hidden in case of successful compilation. cs-script.7z - CS-Script Suite (binaries, documentation, samples); cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples); cs-scriptDocs.7z - CS-Script Documentation.
    - Magick.NET: Magick.NET 7.0.0.0002: Magick.NET linked with ImageMagick 7.
    - babelua: 1.6.7.0: V1.6.7.0 - 2014.8.21. New feature: added a file search window (Ctrl+1 or Alt+L), like the file search in VC Assistant. Stability improvements: performance improvement when BabeLua loads/unloads; performance improvement when the debugger loads lua files.
    - XboxConsole: XboxConsole 2.0.40820.0: Updated release with added support for the August XDK and the Party API (see updated documentation). Supports the following XDK versions: April 2014, May 2014, June 2014 (all QFEs), July 2014 (all QFEs), August 2014.
    - Open NFe: RDI Open NFe 3.0 (alpha): Update to layout 3.10 of the NFe.
    - AssaultCube Reloaded: Release 2.6.1: Windows XP users must download the patch in addition to the Windows package. Some changes couldn't make it into 2.6, and a recode was started before 2.6.1 could be released; however, version 2.6.1 is used to represent the first beta release of 2.7. Changelog: recoded on AC 1.2 as the base version (likely fewer crashes); class manager; simpler killfeed, removed kill messages; hide the KILL indicator in classic, update at 4-second intervals; disable spawn protection upon firing the first sh...
    - SysLog Server: SysLogServer: This is not a commercial product; use at your own responsibility.
    - MolGridCal & MolCal: MolGridCal tutorial v1.1: Updated the contents for grid computing virtual screening.
    - MSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool v1.3.1.38348. What's changed? Updated namespace and assembly name. Bug fixing.
    - SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements; bug fixes; stores auth method and user name; moved experimental settings to the Advanced box.
    - CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha; support info: http://ctrlaltstudio.com/viewer/support; privacy policy: http://ctrlaltstudio.com/viewer/privacy. Disclaimer: this software is not provided or supported by Linden Lab, the makers of Second Life.
    - HDD Guardian: HDD Guardian 0.6.1: New: the package now includes smartctl 6.3. Removed: standard notification e-mail; you now have to set your own mail server to send e-mail alerts. Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displayed a wrong ATA error count.
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: Changes - NEW: added support for 'MadImage.org', 'ImgSpot.org', 'ImgClick.net', 'Imaaage.com', 'Image-Bugs.com', 'Pictomania.org', 'ImgDap.com', and 'FileSpit.com' links. FIXED: 'ImgSee.me' links.
    - CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes over CMake Tools for Visual Studio 1.1: added support for CMake 3.0; added support for word completion; added IntelliSense support for the CMAKEHOSTSYSTEM_INFORMATION command; fixed syntax highlighting for tokens beginning with escape sequences; fixed an issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.

    New Projects:

    - Dnn Picasa Image Gallery: The DnnC Picasa Image Gallery module allows you to display your Picasa web albums and their photos within your DNN website.
    - Hot Mess: Hot Mess game software and Arduino firmware.
    - Kinect HD Face Sample in unmanaged C++: This is an unmanaged C++ project based on the Kinect for Windows v2 SDK sample FaceBasics. Instead of using the Face source, it utilizes the HDFace...
    - Modbus Master: A MODBUS Master application for Windows supporting all MODBUS function codes, a plugin interface, and a scripting interface.
    - Path Finding on Wireless Sensor Network: Path finding on a wireless sensor network.
    - perilla: enhanced C++ template
    - XiamiSigLite-Silent

    Read the article

  • How will closures in Java impact the Java Community?

    - by Ryan Delucchi
    It is one of the most talked-about features planned for Java: closures. Many of us have been longing for them. Some of us (including me) have grown a bit impatient and have turned to scripting languages to fill the void. But once closures have finally arrived in Java, how will they affect the Java community? Will the advancement of VM-targeted scripting languages slow to a crawl, stay the same, or accelerate? Will people flock to the new closure syntax, turning Java code-bases everywhere into more functionally structured implementations? Will we only see closures sprinkled throughout Java? What will be the effect on tool/IDE support? How about performance? And finally, what will it mean for Java's continued adoption as a language, compared with other languages that are rising in popularity?

    To provide an example of one of the latest proposed Java closure syntaxes:

        public interface StringOperation {
            String invoke(String s);
        }
        // ...
        String reversed = (new StringOperation() {
            public String invoke(String s) {
                return new StringBuilder(s).reverse().toString();
            }
        }).invoke("abcd");

    would become:

        String reversed = { String s => new StringBuilder(s).reverse().toString() }.invoke("abcd");

    [source: http://tronicek.blogspot.com/2007/12/closures-closure-is-form-of-anonymous_28.html]
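
    For comparison, a minimal sketch of the lambda syntax Java ultimately shipped in Java 8 (JSR 335), which settled on -> with standard functional interfaces rather than the { ... => ... } form quoted above:

        // What Java 8 eventually shipped: a lambda targeting a standard
        // functional interface instead of a bespoke closure type.
        import java.util.function.UnaryOperator;

        public class ClosureDemo {
            public static void main(String[] args) {
                UnaryOperator<String> reverse =
                        s -> new StringBuilder(s).reverse().toString();
                System.out.println(reverse.apply("abcd"));  // prints "dcba"
            }
        }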

    Read the article
