Search Results

Search found 6598 results on 264 pages for 'opcode cache'.


  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload.

      # vmstat 1
       kthr      memory            page            disk          faults      cpu
       r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
       1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21  9
      12 0 0 130160160 116381936 0 25 0 0 0 0  0  0  0  0  0 200413 117162 197250 70 20  9
      11 0 0 130160176 116381920 0 16 0 0 0 0  0  0  1  0  0 203150 119365 200249 72 21  7
       8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826  96144 165194 56 17 27
       0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1  10245   9376   9164  2  1 97
       0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2  15742  12401  14784  4  1 95
       0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0  19972  17703  19612  6  2 92
      14 0 0 130160176 116377696 0 16 0 0 0 0  0  0  0  0  0 202794 116793 199807 71 21  8
       9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:
    - Unexpected JMeter behavior
    - Network contention
    - Java Garbage Collection
    - Application Server thread pool problems
    - Connection pool problems
    - Database transaction processing
    - Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                          extended device statistics
          r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
          0.0 2053.6    0.0  8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
          0.0 2162.2    0.0  8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
          0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
          0.0   74.0    0.0  7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
          0.0  568.7    0.0  6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
          0.0 1358.0    0.0  5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
          0.0 1314.3    0.0  5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:
    - The database and application server were running on two different SPARC servers.
    - Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
    - Data storage and log file were on different physical disk drives.
    - Reliable low latency I/O is provided by battery backed NVRAM.
    - Highly available: two Fibre Channel links accessed via MPxIO, two mirrored cache controllers.
    - The log file physical disks were mirrored in the storage device.
    - Database log files on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes in my high-end storage?

    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

      # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

      # zpool status ZFS-db-41
        pool: ZFS-db-41
       state: ONLINE
       scan: none requested
      config:
              NAME                     STATE
              ZFS-db-41                ONLINE
                c3t60080E5...F4F6d0    ONLINE
              logs
                c3t60080E5...F8A7d0    ONLINE

    Now, the I/O spikes look like this:

                          extended device statistics
          r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
          0.0 1053.5    0.0  4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
                          extended device statistics
          r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
          0.0 1131.8    0.0  4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
                          extended device statistics
          r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
          0.0 1167.6    0.0  4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
          0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
                          extended device statistics
          r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
          0.0 1247.2    0.0  4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
          0.0   41.0    0.0    70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
                          extended device statistics
          r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
          0.0 1241.3    0.0  4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
                          extended device statistics
          r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
          0.0 1193.2    0.0  4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:
    - The ZFS Intent Log
    - Solaris ZFS, Synchronous Writes and the ZIL Explained
    - ZFS Evil Tuning Guide: Cache Flushes
    - ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article

  • The Apache Foundation abandons the Excalibur project due to lack of development progress

    The Apache Foundation abandons the Excalibur project due to lack of development progress. The Apache Foundation has just announced that it is retiring its open-source Excalibur project. The reason given for ending this Java Inversion of Control container is the inactivity of its development. This lightweight Java-based tool, which let developers teach a software-based container how to "explain" to components how to interact within an application, also included components for caching and XML handling. The official Excalibur mailing list remains open. The other resources (website, wikis, downloads, bug tracker), however, will no longer be...

    Read the article

  • Web Services Example - Part 2: Programmatic

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 2 of our Web Service examples. In this posting we'll take a look at using a SOAP Web Service, but calling it programmatically in code and parsing the return into a bean.
    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: This is a different workspace than WS-Part1.
    Defining our Web Service: Just like our first installment, we are using the same public weather forecast web service provided free by CDYNE Corporation. Sometimes this service goes down, so please ensure you know it's up before reporting this example isn't working. We're going to concentrate on the same two web service methods, GetCityForecastByZIP and GetWeatherInformation.
    Defining the Application: The application setup is identical to the Weather1 version. There are some improvements to the data that is displayed as part of this example, though. Now we are able to show the associated image along with each forecast line when using the Forecast By Zip feature. We've also added the temperature Hi/Low values into the UI.
    Summary of Fundamental Changes In This Application: The most fundamental change is that we're binding the UI to the Bean Data Controls instead of directly to the Web Service Data Controls. This gives us much more flexibility to control the shape of the data and allows us to do caching of the data outside of the Web Service. This way, if your application is, say, offline, your bean could still populate with data from a local cache and still show you some UI, as opposed to completely failing because you don't have any connectivity. In general we promote this type of programming technique with ADF Mobile to insulate your application from any issues with network connectivity.
    What's different with this example? We have set up the Web Service DC the same way, but now we have managed beans to process the data. The following classes define the "Model" of our application: CityInformation-CityForecast-Forecast, WeatherInformation-WeatherDescription. We use WeatherBean for UI interaction with the model layer. If you look through this example, we don't really do that much with the Java code except use it to grab the image URL from the weather description. In a more realistic example, you might be using some JDBC classes to persist the data to a local database. To have a good architecture it is always good to keep your model and UI layers separate. This gets muddied if you start to use bindings on a page invoked from Java code and this Java code starts to become your "model" layer. Since bindings are page specific, your model layer starts to become entwined with your UI. Not good! To help with this, we've added some utility functions that let you invoke DC methods without having a binding, and thus execute methods from your "model" layer without requiring a binding in your page definition. We do this with the invokeDataControlMethod of the AdfmfJavaUtilities class. An example of this method call is available in line 95 of WeatherInformation.java and line 93 of CityInformation.java.
    What's a GenericType? Because Web Service Data Controls (and also URL Data Controls, AKA REST) use generic name/value pairs to define their structure and don't have strongly typed objects, these are actually stored internally as GenericType objects. The GenericType class is simply a property map of name/value pairs that can be hierarchical. There are methods like getAttribute where you supply the index of the attribute or its string property name. Why is this important to know? Because invokeDataControlMethod returns GenericType objects, and developers either need to parse these GenericType objects themselves or use one of our helper functions.
    GenericTypeBeanSerializationHelper: This class does exactly what its name implies. It's a helper class for developers to aid in serialization of GenericTypes to/from Java objects. This is extremely handy if you have a large GenericType object with many attributes (or you're just lazy like me!) and you just want to parse it out into a real Java object you can use more easily. Here you would use the fromGenericType method. This method takes the class of the Java object you wish to return and the GenericType as parameters. The method then parses through each attribute in the GenericType and uses reflection to set that same attribute in the Java class. Then the method returns that new object of the class you specified. This is obviously very handy to avoid a lot of shuffling code between GenericType and your own Java classes. The reverse method, toGenericType, is also available when you want to go the other way. In this case you supply the string that represents the package location in the DataControl definition (Example: "MyDC.myParams.MyCollection") and then pass in the Java object you have that holds the data, and a GenericType is returned to you. Again, it will use reflection to calculate the attributes that match between the Java class and the GenericType and call the getters/setters on those. (A sketch of this flow appears at the end of this post.)
    Issues and Possible Improvements: In the next installment we'll show you how to make your web service calls asynchronously, so your UI will fill dynamically when the service call returns, but in the meantime you show the data you have locally in your bean, fed from some local cache. This gives your users instant delivery of some data while you fetch other data in the background.
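    To make the flow described above concrete, here is a minimal Java sketch of invoking a Web Service Data Control method without a page binding and converting the untyped result into a bean. Treat it as an illustration only: the data control name ("WeatherDC"), the parameter name ("ZIP"), the exact invokeDataControlMethod overload, and the package names are assumptions from memory, so check them against your own DataControls.dcx and the ADF Mobile javadoc before reusing it.

      import java.util.Arrays;
      import oracle.adfmf.framework.api.AdfmfJavaUtilities;          // package assumed
      import oracle.adfmf.framework.api.GenericTypeBeanSerializationHelper;
      import oracle.adfmf.util.GenericType;                          // package assumed

      public class ForecastHelper {
          // Hypothetical helper: calls the GetCityForecastByZIP operation on a
          // data control assumed to be named "WeatherDC", with no page binding.
          public CityInformation loadForecastByZip(String zip) throws Exception {
              GenericType result = (GenericType) AdfmfJavaUtilities.invokeDataControlMethod(
                      "WeatherDC",                      // data control name (assumed)
                      null,                             // feature id
                      "GetCityForecastByZIP",           // web service method
                      Arrays.asList("ZIP"),             // parameter names (assumed)
                      Arrays.<Object>asList(zip),       // parameter values
                      Arrays.<Class>asList(String.class));

              // Reflection-based copy of matching attributes from the untyped
              // GenericType into the strongly typed CityInformation model bean.
              return GenericTypeBeanSerializationHelper.fromGenericType(
                      CityInformation.class, result);
          }
      }

    Going the other way, toGenericType("MyDC.myParams.MyCollection", bean) would hand you back a GenericType you can pass into a data control method, as described above.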

    Read the article

  • Install a Mirror without downloading all the packages in the official repository

    - by Sam
    I'm first going to explain the situation (both PCs are running Ubuntu 12.04): I have a laptop which is connected to a wifi connection, and a desktop which cannot be connected to the Internet (the modem is too far from it), and I want to install some software on the latter. (The two PCs are connected with an Ethernet cable.) I've already searched for a solution, but all I found was the use of some software that should have been already installed on the "Internet-less PC" (Keryx, APTonCD ...). What I want to do is to create a mirror on my laptop which contains the packages I have on this one (located in /var/cache/apt/archive), and I don't want to download all the packages from the official repository; I don't need them. Can someone tell me if this is possible? Thank you.

    Read the article

  • Switching to WIF SessionMode in ASP.NET

    - by Your DisplayName here!
    To make it short: to switch to SessionMode (cache on the server) in ASP.NET, you need to handle an event and set a property. Sounds easy – but you need to set it in the right place. The most popular blog post about this topic is from Vittorio. He advises setting IsSessionMode in WSFederationAuthenticationModule_SessionSecurityTokenCreated. Now there were some open questions on the forums, like this one. So I decided to try it myself – and indeed it didn't work for me either. So I dug a little deeper, and after some trial and error I found the right place (in global.asax):

      void WSFederationAuthenticationModule_SecurityTokenValidated(
          object sender, SecurityTokenValidatedEventArgs e)
      {
          FederatedAuthentication.SessionAuthenticationModule.IsSessionMode = true;
      }

    Not sure if anything has changed since Vittorio's post – but this worked for me. While playing around, I also wrote a little diagnostics tool that allows you to look into the session cookie (for educational purposes). Will post that soon. HTH

    Read the article

  • All video thumbnails fail to be generated

    - by Forage
    Not a single video thumbnail is being generated and shown in Nautilus. The folder ~/.cache/thumbnails/fail/gnome-thumbnail-factory/ keeps getting filled for all of them. I tried removing all the thumbnails from the folder, reinstalling the gstreamer-plugins-... and totem packages, and changing the thumbnail settings (Always, 4 GB) in the Preview section of the Nautilus preferences. All to no avail. Some recommend installing packages like libxine1 and ffmpegthumbnailer, but this did not solve it either. There used to be an .xsession-errors error log file generated in the home folder in previous versions of Ubuntu, but that doesn't seem to be the case any more. I'm running Ubuntu 12.10 GNOME remix (x64) with the GNOME3 PPA packages installed. What could be the cause of the problem and how can I fix it?

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the four announcements made by Larry Ellison today, during the opening session of Oracle OpenWorld 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.
    Oracle IaaS goes Public... and Private...
    Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today brings that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose that they operate the same systems that provide our Public Cloud Infrastructure - Exadata, Exalogic & SuperCluster - behind their firewall, in a Private Cloud model.
    Oracle 12c Multitenant Database
    This is also a major announcement made today about what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no additional cost, especially in terms of memory needed on the server node, which is often THE consolidation limiting factor. The principle can be compared to Solaris Zones: you have a Database Container, which "owns" the memory and database background processes, and "Pluggable" Databases inside this Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it is available, as it is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure.
    X3H2M2, enabling the new Exadata X3 in-Memory Database
    Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge), the memory, with 512GB per node as well, and the new Flash Fire card, now bringing up to 22 TB of Flash cache. All of this 4TB of RAM + 22TB of Flash is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm... making a very big difference compared to a traditional storage flash extension. And what do those extra performances bring to you on an already very efficient system? Double your performance compared to the fastest storage array on the market today (including flash) and divide your storage price by 10 at the same time... Something to consider closely these days...
    Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And as such, Andrew Mendelsohn - Senior Vice President, Database Server Technologies - came on stage to explain that the next step after I/O optimization for the database with Exadata was to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day... and of the Oracle User Group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data, one on finance and fraud profiling and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to size up Oracle OpenWorld... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • How to determine the version and origin of proprietary drivers installed by Additional Drivers?

    - by Bribles
    How can I tell which version and from which repository the Additional Drivers tool is trying to install the fglrx graphics driver? It says that I have a different version of the driver in use. I installed the driver from maverick/restricted and apt-cache tells me it's from a regular Ubuntu mirror. The installed version is the same as the candidate version. Can I get Additional Drivers to tell me what it would install if I activated the driver through it? Is it possible Additional Drivers just assumes it's a different version since it was installed by a different process?

    Read the article

  • My colleague can't visit our website through her provider after long downtime

    - by Peter Westerlund
    We did a frontpage update some days ago that caused the site to crash. The site was down for several hours. After troubleshooting, we concluded that we needed to cache more content; it had been running too many queries. After solving that and rebooting the server, we here in Sweden and Norway were again able to visit the site. But a colleague in Tunisia couldn't. It seems to work from another internet provider, but not her own. What could have happened? And what should we do? Edit: I should add: she is able to visit the site through the tunnel at anonymouse.org.

    Read the article

  • Trouble with 12.10 lag

    - by Brennan
    Well, basically, lately I have been having lag problems with 12.10. I will post my specs, but before the update to 12.10 it said that I had Intel graphics. Now it says I have Gallium. My specs:
    * Memory: 3.9 GiB
    * Processor: Pentium(R) Dual-Core CPU E5500 @ 2.80GHz × 2
    * Graphics: Gallium 0.4 on llvmpipe (LLVM 3.2, 128 bits) (used to say Intel graphics)
    * OS Type: 32-bit
    * Disk: 486.1 GB
    The output of the command sudo apt-cache check is this:
      E: Invalid operation check
    The output of the command sudo lspci -nnk | grep -A5 VGA is this:
      00:02.0 VGA compatible controller [0300]: Intel Corporation 4 Series Chipset Integrated Graphics Controller [8086:2e32] (rev 03)
              Subsystem: ASUSTeK Computer Inc. Device [1043:836d]
              Kernel driver in use: i915
      00:1b.0 Audio device [0403]: Intel Corporation NM10/ICH7 Family High Definition Audio Controller [8086:27d8] (rev 01)
              Subsystem: ASUSTeK Computer Inc. Device [1043:8445]
              Kernel driver in use: snd_hda_intel

    Read the article

  • Mailgun Is Not Detecting My New MX Records

    - by Tyler Crompton
    When I issue a dig command to verify my MX records, I get the following output:

      $ dig example.com MX
      ; <<>> DiG 9.9.5-3-Ubuntu <<>> example.com MX
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47700
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 5

      ;; OPT PSEUDOSECTION:
      ; EDNS: version: 0, flags:; udp: 4096
      ;; QUESTION SECTION:
      ;example.com.                   IN      MX

      ;; ANSWER SECTION:
      example.com.     85468  IN      MX      10 mxa.mailgun.org.
      example.com.     85468  IN      MX      10 mxb.mailgun.org.

      ;; REMAINDER OF OUTPUT REMOVED FOR BREVITY

    However, when I click "Check DNS Records Now" on Mailgun, it verifies the changes to the TXT and CNAME records but says that my MX records have not been changed.

      Type | Priority | Enter This Value | Current Value
      -----+----------+------------------+--------------------
      MX   | 10       | mxa.mailgun.org  | 10 mail.example.com
      MX   | 10       | mxb.mailgun.org  | 10 mail.example.com

    I updated these records three to four hours ago. I know it said to wait up to twenty-four to forty-eight hours. But I feel that if it detected the other DNS changes, then it should detect the MX record changes. Am I being impatient or is this a legitimate concern? What do you suggest I do? Note: I'd create a Mailgun tag for this; I feel that it'd be appropriate, but I don't have enough reputation to do so.

    Read the article

  • Advertising cookies ( ad sense ) on google chrome [closed]

    - by zack
    OK, first, I'm not sure if this is the right Stack Exchange site to ask, but I'm not sure if there is a better one, as the question is quite general. When I visit one site, and this site runs a Google AdSense campaign, I can see its ads on other sites. I suppose this is a cookie from this site, and the ads are then shown using the cookie. However, I removed this specific cookie and cleared the cache, and I can still see the same ad showing on other sites running AdSense. My question is: is there something wrong with Google Chrome? Is there a possibility that Google stores data within Google Chrome itself? If not, how else could they track which ads to show?

    Read the article

  • dig show only answer

    - by Zulakis
    I want dig to show only the answer to my query. Normally, it prints out a lot of additional info like this:

      ;; <<>> DiG 9.7.3 <<>> google.de
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55839
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

      ;; QUESTION SECTION:
      ;google.de.                     IN      A

      ;; ANSWER SECTION:
      google.de.              208     IN      A       173.194.69.94

      ;; Query time: 0 msec
      ;; SERVER: 213.133.99.99#53(213.133.99.99)
      ;; WHEN: Sun Sep 23 10:02:34 2012
      ;; MSG SIZE  rcvd: 43

    I want this to be reduced to just the answer section. dig has a lot of options; a good one I found was +noall +answer:

      ; <<>> DiG 9.7.3 <<>> google.de +noall +answer
      ;; global options: +cmd
      google.de.              145     IN      A       173.194.69.94

    It leaves out most of the stuff, but still shows this options line. Any ideas on how to remove it using dig options? I sure could cut it out using other tools, but an option within dig itself would be the cleanest and nicest.

    Read the article

  • Share on Facebook does not show thumbnail images

    - by matt74tm
    I have a PHP application which has a "Share on Facebook" button that:
    - on the development server, shows the thumbnail images correctly and allows the user to select between them;
    - on the live server, does NOT show the thumbnail images at all.
    The relevant portion of the .htaccess file is:

      # Set up caching on media files for 2 days
      <FilesMatch "\.(gif|jpg|jpeg|png|flv)$">
        ExpiresDefault A172800
        Header append Cache-Control "public"
      </FilesMatch>

    I'm using the exact same set of PHP files and .htaccess, but the server configuration is different. What could be causing this? Note that the text appears fine.
    Edit1: We are also doing some URL rewriting related to images in the .htaccess (on both servers):

      ...
      RewriteRule ^.*/content/image/(.*)$ content/image/$1 [L]
      ...
      RewriteRule ^.*/images/(.*)$ images/$1 [L]
      ...

    Would that be somehow making a difference? Images appear fine all throughout the site. (I posted this question earlier as http://stackoverflow.com/questions/4142597/share-on-facebook-does-not-show-thumbnail-images)

    Read the article

  • Install a Mirror without downloading all packages from official repository

    - by Sam
    I'm first going to explain the situation: I have a laptop which is connected to a wifi connection, and a desktop which cannot be connected to the Internet (the modem is too far from it), and I want to install some software on the latter. (The two PCs are running Ubuntu 12.04 and are connected with an Ethernet cable.) I've already searched for a solution, but all I found was the use of some software that should have been already installed on the "Internet-less PC" (Keryx, APTonCD ...). What I want to do is to create a mirror on my laptop which contains the packages I have on this one (located in /var/cache/apt/archive), and I don't want to download all the packages from the official repository; I actually don't need them. Can someone tell me if it is possible to do that? Thanks,

    Read the article

  • Caching strategies - LRU, MRU, Clock-Pro

    - by golgofa
    I am going to write a bachelor's thesis on caching strategies and really can't find any links to specifications or full descriptions of some of them, only something like summaries from Wikipedia. Please help with some links on LRU and MRU caching and the newer one, Clock-Pro. Thanks a lot; all links are very useful for me. The purpose of the work is to compare different cache strategies to get more efficiency. It is based on a web application with EJB 2.0, so the algorithms will be implemented there, especially in ejbLoad() and ejbFindByPrimaryKey(). Also, one aspect of this application is that it will not use a common scheme of tables in the database; it is based on a metamodel. So, if you have had any experience with this topic, I would be grateful for some of your knowledge.
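    As a reference point for such a comparison, here is a minimal LRU sketch in Java (the question's own language, given EJB 2.0). The class name and capacity are illustrative only, not taken from the question. LRU evicts the entry that has gone longest without being accessed; MRU evicts the most recently accessed entry instead; and Clock-Pro is a CLOCK-based scheme that additionally distinguishes hot and cold entries by reuse distance.

      import java.util.LinkedHashMap;
      import java.util.Map;

      // Minimal LRU cache: LinkedHashMap keeps entries in access order when the
      // third constructor argument is true, so the "eldest" entry is always the
      // least recently used one and gets evicted once capacity is exceeded.
      public class LruCache<K, V> extends LinkedHashMap<K, V> {
          private final int capacity;

          public LruCache(int capacity) {
              super(16, 0.75f, true);   // accessOrder = true
              this.capacity = capacity;
          }

          @Override
          protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
              return size() > capacity; // evict the least recently used entry
          }
      }

    In the EJB setting described above, ejbLoad() could first consult such a map keyed by primary key and only hit the database on a miss; an MRU variant would instead evict the entry just touched, which can win when access patterns cycle over more data than fits in the cache.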

    Read the article

  • Is MUMPS alive?

    - by ern0
    At my first workplace we were using Digital Standard MUMPS on a PDP-11 clone (TPA 440), then we switched to Micronetics Standard MUMPS running on a Hewlett-Packard machine, HP-UX 9, around the early 90's. Is MUMPS still alive? Is anyone still using it? If yes, please write a few words about it: are you using it in character mode, does it act as a web server? etc. (I mean Caché, too.) If you've used it, what were your feelings about it? Did you like it?

    Read the article

  • Ask the Readers: How Do You Score Free Wi-Fi While Traveling?

    - by Jason Fitzpatrick
    The holiday season is in full swing and that means many of us will be traveling–and searching for Wi-Fi nodes in the process. Help your fellow readers out by sharing your best Wi-Fi finding tips and tricks. Internet access is a necessity for the modern traveler but finding it is a bit more difficult than simply plugging into your home Wi-Fi. This week we want to hear all about your tips, tricks, and methods for scoring free Wi-Fi service in your interstate (and even international) travels. How do you keep the bounty of the internet flowing to your laptops, netbooks, tablets, and smart phones as you traverse the world? Sound off in the comments with your best tips and then check back on Friday for the What You Said roundup.

    Read the article

  • no A record show in the answer section in dig results

    - by eric low
    To check the records for the domain, I run dig with the domain name as the parameter:

      dig example.com any

    I get the result below. Why is there no A record shown in the result? What did I do wrong during the setup? Please advise on what I'm supposed to look into. I hope everyone can help me resolve the case ASAP.

      ; <<>> DiG 9.9.3-P2 <<>> example.com any
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44674
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 4, ADDITIONAL: 1

      ;; OPT PSEUDOSECTION:
      ; EDNS: version: 0, flags:; udp: 4096
      ;; QUESTION SECTION:
      ;example.com.                   IN      ANY

      ;; ANSWER SECTION:
      example.com.     3489   IN      MX      100 biz.mail.com.
      example.com.     3482   IN      NS      ns1.domain.com.
      example.com.     3482   IN      NS      ns2.domain.com.

      ;; AUTHORITY SECTION:
      example.com.     3482   IN      NS      ns2.domain.com.
      example.com.     3482   IN      NS      ns1.domain.com.

      ;; Query time: 0 msec
      ;; SERVER: xxx.252.xxx.xxx#53(xxx.252.xxx.xxx)
      ;; WHEN: Wed Oct 30 04:48:34 CDT 2013
      ;; MSG SIZE  rcvd: 349

    Read the article

  • google earth 7, 32bit in 12.10 runs without error but there is no image (globe)

    - by Dennis
    Everything seemed to install fine. I can start Google Earth and all layers are available; I can even zoom in and look at 3-D buildings. But there is absolutely no image data displayed at all. If you look at the whole globe, the outlines are there on an invisible globe. As you zoom in, the base looks dark grey, almost black, but there is no image. I have tried:
    - Tools > Options > Graphics: Safe Mode
    - Tools > Options > Texture colors (all combinations)
    - Tools > Options > Cache (tried several changes to the numbers)
    lspci shows:
      Display controller: Intel Corporation Mobile 915GM/GMS/910GML Express Graphics Controller (rev 03)
    Running on a Dell Inspiron 6000 laptop (1.5 GB memory).

    Read the article

  • Mobile music playback on android phone keeps buffering

    - by ianio
    I have copied some music to an Ubuntu One synced folder on my Karmic machine. I have the Ubuntu One Android app installed on an Android 2.2 phone. When listening to music I get frequent breaks whilst the music is buffering, even with full signal. Every other music streaming service, such as Last.fm and the BBC player, is fine. What is going wrong? MP3s should not have trouble buffering over a good 3G signal. I have set the cache size to 200 MB but I can't find a buffer size. Is it my MP3s? They are usually on the order of 320 kb/s. This is a deal-breaker for me if there is no solution, which is a massive shame as I like the principle of the software. Cheers.

    Read the article

  • Apt-Daemon problem due to a broken sun-java6-jre package

    - by Marv
    I am having problems with installing anything from the Software Center.

      Traceback (most recent call last):
        File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 968, in simulate
          trans.unauthenticated = self._simulate_helper(trans)
        File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1092, in _simulate_helper
          return depends, self._cache.required_download, \
        File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 235, in required_download
          pm.get_archives(fetcher, self._list, self._records)
      SystemError: E:I wasn't able to locate a file for the sun-java6-jre package. This might mean you need to manually fix this package.

    Help?

    Read the article

  • How to compile Asus-syntec-webcam-driver stk11xx

    - by berot3
    I have an Asus V1S, and some time ago I was able to easily compile and install it, but now compiling fails... I hope you guys know what I can do, if there is anything that can be done... After

      svn checkout https://syntekdriver.svn.sourceforge.net/svnroot/syntekdriver syntekdriver
      cd ./syntekdriver/trunk/driver
      make -f Makefile.standalone

    I got:

      make -C /lib/modules/2.6.36.2/build SUBDIRS=/home/berot3/syntekdriver/trunk/driver modules
      make[1]: Betrete Verzeichnis '/usr/src/linux-headers-2.6.38-8-generic'
        CC [M]  /home/berot3/syntekdriver/trunk/driver/stk11xx-usb.o
        CC [M]  /home/berot3/syntekdriver/trunk/driver/stk11xx-v4l.o
      /home/berot3/syntekdriver/trunk/driver/stk11xx-v4l.c:43:28: fatal error: linux/videodev.h: File not found!
      compilation terminated.
      make[2]: *** [/home/berot3/syntekdriver/trunk/driver/stk11xx-v4l.o] Error 1
      make[1]: *** [_module_/home/berot3/syntekdriver/trunk/driver] Error 2
      make[1]: Verlasse Verzeichnis '/usr/src/linux-headers-2.6.38-8-generic'
      make: *** [driver] Error 2

    The same happens for http://bookeldor-net.info/merdier/Makefile-syntekdriver and http://webcache.googleusercontent.com/search?q=cache:OM0zVNlYiVQJ:bookeldor-net.info/merdier/Makefile-syntekdriver+http://bookeldor-net.info/merdier/Makefile-syntekdriver&hl=de&strip=1

    Read the article

  • Is shared hosting enough for feed reader app?

    - by Mahdi
    I'm developing a feed reader app. Feeds are downloaded to my server when someone requests them; then they are downloaded to the user's computer. There will be a 1-2 minute cache for popular feeds. We expect that there will be 1000-5000 users (that use our app). Is shared hosting (which currently serves almost 100 websites, and will serve 500 websites at maximum) enough for this purpose, or should we look for VPS or dedicated hosting? Note: our shared hosting plan has unlimited disk space and unlimited bandwidth.

    Read the article

  • Should I tell a departed coworker about their "sev 1" defect?

    - by noahz
    I had a co-worker leave our company recently. Before leaving, he coded a component that had a severe memory leak that caused a production outage (OutOfMemoryError in Java). The problem was essentially a HashMap that grew and never removed entries, and the solution was to replace the HashMap with a cache implementation. From a professional standpoint, I feel that I should let him know about the defect so he can learn from the error. On the other hand, once people leave a company, they often don't want to hear about legacy projects that they have left behind for bigger and better things. What is the general protocol for this sort of situation?

    Read the article
