Search Results

Search found 20163 results on 807 pages for 'struct size'.


  • How can I tell if my Amazon Windows instance was an SQL Server AMI?

    - by Aligma
    I want to purchase some reserved instances, because I have several instances already created and running 24 hours a day. When I go to purchase a Windows instance, I can see 3 options: Windows; Windows with SQL Server Standard; Windows with SQL Server Web. I don't know which of these was used to create the original instance. Is there a way I can find out? My assumptions: the instance type is important because, as far as I understand, the way to purchase a reserved instance is to first have a running instance, and then purchase a matching reserved instance. The reserved instance is not itself a new machine, but a kind of contract between you and Amazon to pay for an instance for 1 or 3 years at a discounted rate. The contracted, reserved instance will "offset" one matching running instance where they have the same size and platform. Please feel free to correct me if these assumptions are incorrect.
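    One way to check (an editor's sketch using today's boto3 SDK, which postdates the question; the instance ID and region are placeholders): the Platform field distinguishes Windows from Linux, and the billing products recorded on the source AMI usually reveal whether SQL Server licensing is attached.

        # Sketch: infer platform and billing products of a running instance.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        reservations = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
        instance = reservations["Reservations"][0]["Instances"][0]
        print("Platform:", instance.get("Platform", "linux"))  # 'windows' for Windows instances

        # License-included AMIs (e.g. Windows with SQL Server) carry extra billing product codes.
        images = ec2.describe_images(ImageIds=[instance["ImageId"]])["Images"]
        if images:
            print("Billing products:", images[0].get("BillingProducts"))
        else:
            print("Source AMI no longer available; check the instance's product codes instead.")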

    Read the article

  • OpenGL - Stack overflow if I do, Stack underflow if I don't!

    - by Wayne Werner
    Hi, I'm in a multimedia class in college, and we're "learning" OpenGL as part of the class. I'm trying to figure out how the OpenGL camera vs. modelview works, and so I found this example. I'm trying to port the example to Python using the OpenGL bindings - it starts up OpenGL much faster, so for testing purposes it's a lot nicer - but I keep running into a stack overflow error with the glPushMatrix in this code:

        def cube():
            for x in xrange(10):
                glPushMatrix()
                glTranslated(-positionx[x + 1] * 10, 0, -positionz[x + 1] * 10)  # translate the cube
                glutSolidCube(2)  # draw the cube
                glPopMatrix()

    According to this reference, that happens when the matrix stack is full. So I thought, "well, if it's full, let me just pop the matrix off the top of the stack, and there will be room". I modified the code to:

        def cube():
            glPopMatrix()
            for x in xrange(10):
                glPushMatrix()
                glTranslated(-positionx[x + 1] * 10, 0, -positionz[x + 1] * 10)  # translate the cube
                glutSolidCube(2)  # draw the cube
                glPopMatrix()

    And now I get a stack underflow error - which apparently happens when the stack has only one matrix. So am I just waaay off base in my understanding? Or is there some way to increase the matrix stack size? Also, if anyone has some good (online) references (examples, etc.) for understanding how the camera/model matrices work together, I would sincerely appreciate them! Thanks!
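    Incidentally, the push/pop pair inside the loop above is balanced, so an overflow usually points to an unpaired glPushMatrix elsewhere in the frame. A small debugging sketch (PyOpenGL assumed; call it inside a valid GL context, e.g. at the top of the display function):

        # Sketch: watch the fixed-function modelview stack depth while debugging.
        from OpenGL.GL import glGetIntegerv, GL_MAX_MODELVIEW_STACK_DEPTH, GL_MODELVIEW_STACK_DEPTH

        def report_stack():
            max_depth = glGetIntegerv(GL_MAX_MODELVIEW_STACK_DEPTH)  # implementation limit, often 32
            depth = glGetIntegerv(GL_MODELVIEW_STACK_DEPTH)          # 1 when nothing is pushed
            print("modelview stack: %d of %d" % (depth, max_depth))

        # If the depth printed each frame keeps climbing, some glPushMatrix in the
        # frame is missing its matching glPopMatrix.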

    Read the article

  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database make-over. The general data model: There are records with RIDs, grouped together by group IDs (GID). The records have arbitrary data fields (maybe 5-15), with a few of them mandatory and the rest optional, and thus sparse. The general use model: There are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes they are associated with existing GIDs. There aren't as many reads, but when they happen, they need to be pretty fast, or at least constant speed regardless of the database size. And when the reads happen, the application needs to retrieve all the records/RIDs with a certain GID. I don't have a need to search by the record field values. Primarily, I will need to query by the GID and maybe RID. What database implementation should I use? I did some initial research between document-oriented and column-oriented databases, and it seems the document-oriented ones are a good fit, model-wise. I could store all the records together under the same document key using the GID. But I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, and should I create a column for each record/RID (there may be thousands or hundreds of thousands of records in a group/GID)? Or should my key be the RID, with each row having a column for the GID value? What results in faster writes and reads under this model?
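    For the Cassandra route, one way to picture the wide-partition model (an editor's sketch using the DataStax Python driver; keyspace, table, and field names are hypothetical): partition by GID so every record of a group lands on one partition, and cluster by RID so "all records for a GID" is a single sequential read.

        # Sketch: GID as partition key, RID as clustering key (names hypothetical).
        from cassandra.cluster import Cluster

        session = Cluster(["127.0.0.1"]).connect()
        session.execute("""
            CREATE KEYSPACE IF NOT EXISTS app
            WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
        """)
        session.execute("""
            CREATE TABLE IF NOT EXISTS app.records (
                gid    bigint,               -- partition key: one partition per group
                rid    bigint,               -- clustering key: orders records in the group
                f1     text,                 -- a mandatory field...
                extras map<text, text>,      -- ...and the sparse optional ones
                PRIMARY KEY ((gid), rid)
            )
        """)

        # Heavy write path: each insert goes to the partition owned by the GID.
        session.execute("INSERT INTO app.records (gid, rid, f1) VALUES (%s, %s, %s)", (42, 1, "x"))

        # The read pattern from the question: everything under one GID, one partition scan.
        rows = session.execute("SELECT rid, f1, extras FROM app.records WHERE gid = %s", (42,))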

    Read the article

  • Dig returns "status: REFUSED" for external queries?

    - by Mikey
    I can't seem to work out why my DNS isn't working properly, if I run dig from the nameserver it functions correctly:

        # dig ungl.org

        ; <<>> DiG 9.5.1-P2.1 <<>> ungl.org
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24585
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

        ;; QUESTION SECTION:
        ;ungl.org.                      IN      A

        ;; ANSWER SECTION:
        ungl.org.               38400   IN      A       188.165.34.72

        ;; AUTHORITY SECTION:
        ungl.org.               38400   IN      NS      ns.kimsufi.com.
        ungl.org.               38400   IN      NS      r29901.ovh.net.

        ;; ADDITIONAL SECTION:
        ns.kimsufi.com.         85529   IN      A       213.186.33.199

        ;; Query time: 1 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Sat Mar 13 01:04:06 2010
        ;; MSG SIZE  rcvd: 114

    but when I run it from another server in the same datacenter I receive:

        # dig @87.98.167.208 ungl.org

        ; <<>> DiG 9.5.1-P2.1 <<>> @87.98.167.208 ungl.org
        ; (1 server found)
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 18787
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;ungl.org.                      IN      A

        ;; Query time: 1 msec
        ;; SERVER: 87.98.167.208#53(87.98.167.208)
        ;; WHEN: Sat Mar 13 01:01:35 2010
        ;; MSG SIZE  rcvd: 26

    my zone file for this domain is

        $ttl 38400
        ungl.org.   IN  SOA r29901.ovh.net. mikey.aol.com. (
                    201003121
                    10800
                    3600
                    604800
                    38400 )
        ungl.org.   IN  NS  r29901.ovh.net.
        ungl.org.   IN  NS  ns.kimsufi.com.
        ungl.org.   IN  A   188.165.34.72
        localhost.  IN  A   127.0.0.1
        www         IN  A   188.165.34.72

    and the named.conf.options is default:

        options {
            directory "/var/cache/bind";

            // If there is a firewall between you and nameservers you want
            // to talk to, you may need to fix the firewall to allow multiple
            // ports to talk. See http://www.kb.cert.org/vuls/id/800113

            // If your ISP provided one or more IP addresses for stable
            // nameservers, you probably want to use them as forwarders.
            // Uncomment the following block, and insert the addresses replacing
            // the all-0's placeholder.

            // forwarders {
            //     0.0.0.0;
            // };

            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { ::1; };
            listen-on { 127.0.0.1; };
            allow-recursion { 127.0.0.1; };
        };

    named.conf.local:

        //
        // Do any local configuration here
        //
        // Consider adding the 1918 zones here, if they are not used in your
        // organization
        // include "/etc/bind/zones.rfc1918";

        zone "eugl.eu" {
            type master;
            file "/etc/bind/eugl.eu";
            notify no;
        };

        zone "ungl.org" {
            type master;
            file "/etc/bind/ungl.org";
            notify no;
        };

    The server is running Ubuntu 9.10 and Bind 9, if anyone can shed some light on this for me it'd make me very happy! thanks
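    A side note from the config above (an editor's sketch, not a diagnosis): the options block limits both listening and recursion to 127.0.0.1. An authoritative server that must answer external queries typically carries directives like these (all standard BIND 9; values illustrative):

        options {
            directory "/var/cache/bind";
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { ::1; };
            listen-on { any; };              // answer on all interfaces, not only loopback
            allow-query { any; };            // anyone may query the authoritative zones
            allow-recursion { 127.0.0.1; };  // recursion stays local-only
        };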

    Read the article

  • What is the best approach for coding in a slow compilation environment

    - by Andrew
    I used to code in C# in a TDD style - write or change a small chunk of code, re-compile the whole solution in 10 seconds, re-run the tests, and again. Easy... That development methodology worked very well for me for a few years, until last year when I had to go back to C++ coding, and it really feels like my productivity has dramatically decreased since. C++ as a language is not a problem - I have quite a lot of C++ dev experience... but in the past. My productivity is still OK for small projects, but it gets worse as the project size increases, and once compilation time hits 10+ minutes it gets really bad. And if I find an error I have to start compilation again, etc. That is just purely frustrating. Thus I concluded that coding in small chunks (as before) is no longer acceptable - any recommendations on how I can get myself back into the long-gone habit of coding for an hour or so, reviewing the code manually (without relying on a fast C# compiler), and only recompiling/re-running unit tests once every couple of hours? With C# and TDD it was very easy to write code in an evolutionary way - after a dozen iterations whatever crap I started with ended up as good code, but that just does not work for me anymore (in a slow compilation environment). Would really appreciate your inputs and recos. p.s. not sure how to tag the question - anyone is welcome to re-tag it appropriately. Cheers.

    Read the article

  • How to determine source of file corruption for downloaded images?

    - by sunpech
    I've been downloading Visual Studio 2010 off of the Dreamspark.com website using Akamai Downloader. The .img file is 2.2 GB in size. I've downloaded it twice so far, and when I try to mount it using Gizmo, it complains that "the disk structure is corrupted and unreadable". The drive does mount, but it is unreadable. Is there a way to determine where the corruption is being introduced? Is it my computer as it's receiving the file? The hosting server(s)? My ISP? My router? My ethernet cable? It's a hefty download to do again and again from home, only to find out once it's fully downloaded that it's unreadable. I think I can almost rule out my PC, router, and ethernet cable, as I've been able to download various other files without corruption. Note: There is no checksum to verify against.
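    Since no official checksum is published, a do-it-yourself comparison is still possible (an editor's sketch; the file name is a placeholder): hash each downloaded copy locally. Two independent downloads with identical digests point away from the PC/router/cable and toward the source file or the mounting step; differing digests mean the corruption happens in transit.

        # Sketch: hash a large downloaded image so repeated downloads can be compared.
        import hashlib

        def sha256_of(path, chunk_size=1 << 20):
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        print(sha256_of("VS2010.img"))  # placeholder path for the downloaded .img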

    Read the article

  • partition alignment on fresh windows 2003 ent server

    - by Datapimp23
    Hi, I have this server which has its physical disks in RAID 5, controlled by a 3com RAID controller. The size of the stripe unit is unknown for the moment (I can check tomorrow in the office). I need to install Windows Server 2003 Enterprise and create 2 partitions (OS, Data). I'd like to create the partitions before the installation of Windows Server, and they have to be aligned properly. I have the newest version of GParted on a disc, but I have no clue if this is the right tool. Can someone point me in the right direction? Thanks
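    One commonly used route (a sketch under the assumption that a Windows PE environment with a Server 2003 SP1-era diskpart, which accepts an align parameter, is available; the sizes and the 64 KB alignment are illustrative and should match the RAID stripe unit):

        rem Sketch: diskpart script, run as "diskpart /s align.txt" from Windows PE
        select disk 0
        create partition primary size=40960 align=64
        create partition primary align=64
        list partition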

    Read the article

  • A DirectoryCatalog class for Silverlight MEF (Managed Extensibility Framework)

    - by Dixin
    In the MEF (Managed Extensibility Framework) for .NET, there are useful ComposablePartCatalog implementations in System.ComponentModel.Composition.dll, like:

    System.ComponentModel.Composition.Hosting.AggregateCatalog
    System.ComponentModel.Composition.Hosting.AssemblyCatalog
    System.ComponentModel.Composition.Hosting.DirectoryCatalog
    System.ComponentModel.Composition.Hosting.TypeCatalog

    While in Silverlight, there is an extra System.ComponentModel.Composition.Hosting.DeploymentCatalog. As a wrapper of AssemblyCatalog, it can load all assemblies in a XAP file on the web server side. Unfortunately, in Silverlight there is no DirectoryCatalog to load a folder.

    Background

    There are scenarios where a Silverlight application may need to load all XAP files in a folder on the web server side, for example:

    If the Silverlight application is extensible and supports plug-ins, there would be a /ClientBin/Plugins/ folder on the web server, and each plug-in would be an individual XAP file in the folder. In this scenario, after the application is loaded and started up, it would like to load all XAP files in the /ClientBin/Plugins/ folder.

    If the application supports themes, there would be a /ClientBin/Themes/ folder, and each theme would be an individual XAP file too. The application would also need to load all XAP files in /ClientBin/Themes/.

    It is useful if we have a DirectoryCatalog:

        DirectoryCatalog catalog = new DirectoryCatalog("/Plugins");
        catalog.DownloadCompleted += (sender, e) => { };
        catalog.DownloadAsync();

    Obviously, the implementation of DirectoryCatalog is easy. It is just a collection of DeploymentCatalog objects.

    Retrieve file list from a directory

    Of course, to retrieve a file list from a web folder, the folder's "Directory Browsing" feature must be enabled, so that when the folder is requested, it responds with a list of its files and folders. This is nothing but a simple HTML page:

        <html>
        <head>
        <title>localhost - /Folder/</title>
        </head>
        <body>
        <h1>localhost - /Folder/</h1>
        <hr>
        <pre>
        <a href="/">[To Parent Directory]</a><br>
        <br>
        1/3/2011 7:22 PM 185 <a href="/Folder/File.txt">File.txt</a><br>
        1/3/2011 7:22 PM &lt;dir&gt; <a href="/Folder/Folder/">Folder</a><br>
        </pre>
        <hr>
        </body>
        </html>

    For the ASP.NET Development Server of Visual Studio, directory browsing is enabled by default. The HTML <body> is almost the same:

        <body bgcolor="white">
        <h2><i>Directory Listing -- /ClientBin/</i></h2>
        <hr width="100%" size="1" color="silver">
        <pre>
        <a href="/">[To Parent Directory]</a>

        Thursday, January 27, 2011 11:51 PM 282,538 <a href="Test.xap">Test.xap</a>
        Tuesday, January 04, 2011 02:06 AM &lt;dir&gt; <a href="TestFolder/">TestFolder</a>
        </pre>
        <hr width="100%" size="1" color="silver">
        <b>Version Information:</b>&nbsp;ASP.NET Development Server 10.0.0.0
        </body>

    The only difference is that IIS's links start with a slash, but here the links do not. Here, one way to get the file list is to read the href attributes of the links:

        [Pure]
        private IEnumerable<Uri> GetFilesFromDirectory(string html)
        {
            Contract.Requires(html != null);
            Contract.Ensures(Contract.Result<IEnumerable<Uri>>() != null);

            return new Regex(
                "<a href=\"(?<uriRelative>[^\"]*)\">[^<]*</a>",
                RegexOptions.IgnoreCase | RegexOptions.CultureInvariant)
                .Matches(html)
                .OfType<Match>()
                .Where(match => match.Success)
                .Select(match => match.Groups["uriRelative"].Value)
                .Where(uriRelative => uriRelative.EndsWith(".xap", StringComparison.Ordinal))
                .Select(uriRelative =>
                {
                    Uri baseUri = this.Uri.IsAbsoluteUri
                        ? this.Uri
                        : new Uri(Application.Current.Host.Source, this.Uri);
                    uriRelative = uriRelative.StartsWith("/", StringComparison.Ordinal)
                        ? uriRelative
                        : (baseUri.LocalPath.EndsWith("/", StringComparison.Ordinal)
                            ? baseUri.LocalPath + uriRelative
                            : baseUri.LocalPath + "/" + uriRelative);
                    return new Uri(baseUri, uriRelative);
                });
        }

    Please notice that the folders' links end with a slash; they are filtered out by the second Where() query. The above method can find files' URIs in the specified IIS folder, or in an ASP.NET Development Server folder while debugging. To support other formats of file list, a constructor is needed that can take a customized method:

        /// <summary>
        /// Initializes a new instance of the <see cref="T:System.ComponentModel.Composition.Hosting.DirectoryCatalog" />
        /// class with <see cref="T:System.ComponentModel.Composition.Primitives.ComposablePartDefinition" /> objects
        /// based on all the XAP files in the specified directory URI.
        /// </summary>
        /// <param name="uri">
        /// URI to the directory to scan for XAPs to add to the catalog.
        /// The URI must be absolute, or relative to <see cref="P:System.Windows.Interop.SilverlightHost.Source" />.
        /// </param>
        /// <param name="getFilesFromDirectory">
        /// The method to find files' URIs in the specified directory.
        /// </param>
        public DirectoryCatalog(Uri uri, Func<string, IEnumerable<Uri>> getFilesFromDirectory)
        {
            Contract.Requires(uri != null);

            this._uri = uri;
            this._getFilesFromDirectory = getFilesFromDirectory ?? this.GetFilesFromDirectory;
            this._webClient = new Lazy<WebClient>(() => new WebClient());

            // Initializes other members.
        }

    When the getFilesFromDirectory parameter is null, the above GetFilesFromDirectory() method is used as the default.

    Download the directory's XAP file list

    Now a public method can be created to start the downloading:

        /// <summary>
        /// Begins downloading the XAP files in the directory.
        /// </summary>
        public void DownloadAsync()
        {
            this.ThrowIfDisposed();

            if (Interlocked.CompareExchange(ref this._state, State.DownloadStarted, State.Created) == 0)
            {
                this._webClient.Value.OpenReadCompleted += this.HandleOpenReadCompleted;
                this._webClient.Value.OpenReadAsync(this.Uri, this);
            }
            else
            {
                this.MutateStateOrThrow(State.DownloadCompleted, State.Initialized);
                this.OnDownloadCompleted(new AsyncCompletedEventArgs(null, false, this));
            }
        }

    Here the HandleOpenReadCompleted() method is invoked when the file list HTML is downloaded.

    Download all XAP files

    After retrieving all the files' URIs, the next step becomes even easier. HandleOpenReadCompleted() just uses the built-in DeploymentCatalog to download the XAPs, and aggregates them into one AggregateCatalog:

        private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
        {
            Exception error = e.Error;
            bool cancelled = e.Cancelled;

            if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted)
            {
                cancelled = true;
            }

            if (error == null && !cancelled)
            {
                try
                {
                    using (StreamReader reader = new StreamReader(e.Result))
                    {
                        string html = reader.ReadToEnd();
                        IEnumerable<Uri> uris = this._getFilesFromDirectory(html);
                        Contract.Assume(uris != null);
                        IEnumerable<DeploymentCatalog> deploymentCatalogs = uris.Select(uri => new DeploymentCatalog(uri));
                        deploymentCatalogs.ForEach(deploymentCatalog =>
                        {
                            this._aggregateCatalog.Catalogs.Add(deploymentCatalog);
                            deploymentCatalog.DownloadCompleted += this.HandleDownloadCompleted;
                        });
                        deploymentCatalogs.ForEach(deploymentCatalog => deploymentCatalog.DownloadAsync());
                    }
                }
                catch (Exception exception)
                {
                    error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception);
                }
            }

            // Exception handling.
        }

    In HandleDownloadCompleted(), if all XAPs are downloaded without exception, the OnDownloadCompleted() callback method is invoked:

        private void HandleDownloadCompleted(object sender, AsyncCompletedEventArgs e)
        {
            if (Interlocked.Increment(ref this._downloaded) == this._aggregateCatalog.Catalogs.Count)
            {
                this.OnDownloadCompleted(e);
            }
        }

    Exception handling

    This DirectoryCatalog can work only if the directory browsing feature is enabled. It is important to inform the caller when the directory cannot be browsed for XAP downloading:

        private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
        {
            Exception error = e.Error;
            bool cancelled = e.Cancelled;

            if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted)
            {
                cancelled = true;
            }

            if (error == null && !cancelled)
            {
                try
                {
                    // No exception thrown when browsing directory. Downloads the listed XAPs.
                }
                catch (Exception exception)
                {
                    error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception);
                }
            }

            WebException webException = error as WebException;
            if (webException != null)
            {
                HttpWebResponse webResponse = webException.Response as HttpWebResponse;
                if (webResponse != null)
                {
                    // Internally, WebClient uses WebRequest.Create() to create the WebRequest object. Here does the same thing.
                    WebRequest request = WebRequest.Create(Application.Current.Host.Source);
                    Contract.Assume(request != null);
                    if (request.CreatorInstance == WebRequestCreator.ClientHttp &&
                        // Silverlight is in client HTTP handling; all HTTP status codes are supported.
                        webResponse.StatusCode == HttpStatusCode.Forbidden)
                    {
                        // When directory browsing is disabled, the HTTP status code is 403 (forbidden).
                        error = new InvalidOperationException(
                            Resources.InvalidOperationException_ErrorListingDirectory_ClientHttp, webException);
                    }
                    else if (request.CreatorInstance == WebRequestCreator.BrowserHttp &&
                        // Silverlight is in browser HTTP handling; only 200 and 404 are supported.
                        webResponse.StatusCode == HttpStatusCode.NotFound)
                    {
                        // When directory browsing is disabled, the HTTP status code is 404 (not found).
                        error = new InvalidOperationException(
                            Resources.InvalidOperationException_ErrorListingDirectory_BrowserHttp, webException);
                    }
                }
            }

            this.OnDownloadCompleted(new AsyncCompletedEventArgs(error, cancelled, this));
        }

    Please notice that a Silverlight 3+ application can work either in client HTTP handling or in browser HTTP handling. One difference is:

    In browser HTTP handling, only HTTP status codes 200 (OK) and 404 (anything not OK, including 500, 403, etc.) are supported.

    In client HTTP handling, all HTTP status codes are supported.

    So in the above code, exceptions in the 2 modes are handled differently.

    Conclusion

    Here is what the whole DirectoryCatalog looks like. Please click here to download the source code; a simple unit test is included. This is a rough implementation, and, for convenience, some of the design and coding simply follow the built-in AggregateCatalog class and Deployment class. Please feel free to modify the code, and please kindly tell me if any issue is found.

    Read the article

  • Black screen on Ubuntu 12.04

    - by user1648371
    I've just upgraded to Ubuntu 12.04 and I'm experiencing some problems. The first thing I noticed is that when I click the Workspace Switcher all I get is a black screen (I can guess where the different workspaces are located and click on them - not a practical solution though). In addition, when I lock the screen or suspend the laptop (a Vaio VPCEB4M1E) I get a shifted screen: I see the rightmost vertical stripe on the left side of the monitor and nothing of all the rest; to put it clearly, I can see the gear that allows me to turn the pc off, etc., but not much more. When I go to the Additional Drivers menu I see the "ATI/AMD proprietary FGLRX graphics driver" is installed and the post-release update version is available. I don't know if the problem is driver-related, so before doing anything I'd like to get some suggestions from you guys. Thank you!

    Read the article

  • Slow NFS transfer performance of small files

    - by Arie K
    I'm using Openfiler 2.3 on an HP ML370 G5, Smart Array P400, SAS disks combined using RAID 1+0. I set up an NFS share from an ext3 partition using Openfiler's web-based configuration, and I succeeded in mounting the share from another host. Both hosts are connected using a dedicated gigabit link. A simple benchmark using dd:

        $ dd if=/dev/zero of=outfile bs=1000 count=2000000
        2000000+0 records in
        2000000+0 records out
        2000000000 bytes (2.0 GB) copied, 34.4737 s, 58.0 MB/s

    shows it can achieve a moderate transfer speed (58.0 MB/s). But if I copy a directory containing many small files (.php and .jpg, around 1-4 kB per file) of total size ~300 MB, the cp process takes about 10 minutes to finish. Is NFS not suitable for small file transfers like the above case? Or are there some parameters that must be adjusted?
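    For context: small-file copies over NFS are dominated by per-file round trips (opens, attribute checks, synchronous metadata operations), so the dd figure says little about this workload. Mount and export options often tuned for it (an editor's sketch; values illustrative, and note that async acknowledges writes before they reach disk, trading safety for speed):

        # client side: larger transfers, fewer attribute round trips
        mount -t nfs -o rsize=32768,wsize=32768,noatime,nocto server:/mnt/share /mnt/nfs

        # server side, /etc/exports: async speeds up metadata-heavy workloads
        /mnt/share 192.168.1.0/24(rw,async,no_subtree_check)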

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 18 (sys.dm_io_virtual_file_stats)

    - by Tamarick Hill
    The sys.dm_io_virtual_file_stats Dynamic Management Function is used to return IO statistic information about each of the database files on your server. As input parameters, this function takes a database_id and a file_id. If you want to return IO statistic information for all files, you can simply pass in NULL values for both of these. Let's have a look at this function and examine its results:

        SELECT db_name(database_id) DatabaseName, * FROM sys.dm_io_virtual_file_stats(NULL, NULL)

    The first column in the result set is the DatabaseName, which is just a column I created using the db_name() system function and the database_id column from this function. Next we have a file_id, which represents the ID for the file, whether it be a data file or a transaction log file. The 'sample_ms' column represents the total time in milliseconds that the instance has been up and running. Next we have the 'num_of_reads', 'num_of_bytes_read', and later 'num_of_writes' and 'num_of_bytes_written'. These columns represent the number of reads or writes and the number of bytes read or written against a particular file. These columns are beneficial when determining how often a particular file is being accessed. The 'io_stall_read_ms' and 'io_stall_write_ms' columns each represent the total time in milliseconds that users have had to wait for reads or writes against a file, respectively. The 'io_stall' column is the sum of both read and write IO stalls. The 'size_on_disk_bytes' column represents the size of the respective file on your disk subsystem. Lastly, the 'file_handle' column is simply the Windows file handle. This Dynamic Management Function is useful when you need to analyze your database files for the purpose of segregating high-IO databases. This DMF gives you a good view of which of your database files are being accessed the most and which ones may be generating the largest IO stalls. These could be your best candidates for moving into separate IO channels. For more information about this DMF, please see the Books Online link: http://msdn.microsoft.com/en-us/library/ms190326.aspx Follow me on Twitter @PrimeTimeDBA
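    As an illustration built only on the columns described above, the average stall per read and per write can be derived directly, which helps rank files by IO pressure (a sketch):

        -- Sketch: rank database files by IO stalls, using the DMF columns above.
        SELECT  DB_NAME(vfs.database_id) AS DatabaseName,
                vfs.file_id,
                vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
                vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms,
                vfs.size_on_disk_bytes / 1048576 AS size_on_disk_mb
        FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        ORDER BY vfs.io_stall DESC;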

    Read the article

  • Mysqldump causes "Too many connections"

    - by vbachev
    A scheduled backup using mysqldump on one of our databases is causing a "Too many connections" error. The database has both InnoDB and MyISAM tables, with a size of around 500 MB. The "Too many connections" error appears for about 2-3 minutes. We understand that mysqldump locks the tables, which causes all other queries and connections to pile up and jam the MySQL server. We need frequent backups, and we cannot afford server downtime or putting websites in maintenance mode while doing it. Our websites are global and traffic is high all the time, so it's hard to find a moment for backups. How can we avoid downtime during backups? Is there maybe a way to use mysqldump so that it will not lock all tables at the same time? Is there an alternative to backing up with mysqldump?
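    For the InnoDB side, mysqldump's standard options can avoid the global table locks (a sketch; MyISAM tables still require locks for a consistent copy, so this only fully helps once those are converted to InnoDB):

        # Sketch: consistent InnoDB dump without locking every table (stock mysqldump flags)
        mysqldump --single-transaction --skip-lock-tables --quick dbname > backup.sql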

    Read the article

  • Nginx & Passenger - failed (11: Resource temporarily unavailable) while connecting to upstream

    - by Toby Hede
    I have an Nginx and Passenger setup that is proving problematic. At relatively low loads the server seems to get backed up and starts churning results like this into the error.log:

        connect() to unix:/passenger_helper_server failed (11: Resource temporarily unavailable) while connecting to upstream

    My passenger setup is:

        passenger_min_instances 2;
        passenger_pool_idle_time 1200;
        passenger_max_pool_size 20;

    I have done some digging, and it looks like the CPU gets pegged. Memory usage seems fine - passenger-memory-stats shows at most about 700 MB being used, but CPU approaches 100%. Is this enough to cause this type of error? Should I bring the pool size down? Are there other configuration settings I should be looking at? Any help appreciated. Other pertinent information: Amazon EC2 Small Instance, Ubuntu 10.10, Nginx (latest stable), Passenger (latest stable), Rails 3.0.4

    Read the article

  • Unable to install updates on 14.04 LTS

    - by Mike
    I have been getting update notifications for a few weeks now, but whenever I attempt to install them I get this message:

        The upgrade needs a total of 74.6 M free space on disk '/boot'. Please free at least an additional 29.8 M of disk space on '/boot'. Empty your trash and remove temporary packages of former installations using 'sudo apt-get clean'.

    First of all, I don't have permission to access /boot (don't know why, as it's a standalone machine and I'm the only user). Secondly, I emptied the trash. Thirdly, I launched Terminal and entered sudo apt-get clean. I was asked for a sudo password, entered my system password, and re-entered sudo apt-get clean. The cursor stopped blinking - I assumed it was doing its "thing". I let it go for about 10 minutes, then exited Terminal. Tried to install the updates but just got the same message. Is there something I'm ignorant of? This is the output I get from the command df -h, and I have no idea what it all means!

    @Tim, What's bash and why am I denied access to fstab and /boot?

        mike@mike-MS-7800:~$ /etc/fstab
        bash: /etc/fstab: Permission denied
        mike@mike-MS-7800:~$ df -h
        Filesystem                   Size  Used Avail Use% Mounted on
        /dev/mapper/ubuntu--vg-root  913G   11G  856G   2% /
        none                         4.0K     0  4.0K   0% /sys/fs/cgroup
        udev                         1.7G  4.0K  1.7G   1% /dev
        tmpfs                        335M  1.6M  333M   1% /run
        none                         5.0M  4.0K  5.0M   1% /run/lock
        none                         1.7G   14M  1.7G   1% /run/shm
        none                         100M   52K  100M   1% /run/user
        /dev/sda2                    237M  182M   43M  81% /boot
        /dev/sda1                    487M  3.4M  483M   1% /boot/efi
        /dev/sr1                      31M   31M     0 100% /media/mike/Optus Mobile
        mike@mike-MS-7800:~$

    I ran this from the terminal and all is now working:

        dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge

    Read the article

  • Mac OS X - rmdir fails with "Operation not permitted" for a folder created by a PC on a removable dr

    - by maxint
    Hello. I have a problem (using Mac OS X 10.5.8) with the access rights of a folder that was presumably created by a virus on a disk-on-key drive when I used it with a PC. I can't remove the folder or change its name. In Finder's Info window the Lock box is unchecked and uncheckable - if I try to check it, it flips back to off. Please see the details:

        MaxBookAir:GARMIN'S maxint$ rmdir winamp_cache_0001/
        rmdir: winamp_cache_0001/: Operation not permitted
        MaxBookAir:GARMIN'S maxint$ mv winamp_cache_0001 test
        mv: rename winamp_cache_0001 to test: Operation not permitted
        MaxBookAir:GARMIN'S maxint$ GetFileInfo winamp_cache_0001
        directory: "/Volumes/GARMIN'S/winamp_cache_0001"
        attributes: avbstclinmedz
        created: 12/23/2009 14:34:52
        modified: 02/13/2010 22:52:36
        MaxBookAir:GARMIN'S maxint$ stat -x winamp_cache_0001
          File: "winamp_cache_0001"
          Size: 32768        FileType: Directory
          Mode: (0777/drwxrwxrwx)         Uid: (  502/  maxint)  Gid: (   20/   staff)
        Device: 14,5   Inode: 7439    Links: 1
        Access: Wed Dec 23 00:00:00 2009
        Modify: Sat Feb 13 22:52:36 2010
        Change: Sat Feb 13 22:52:36 2010
        MaxBookAir:GARMIN'S maxint$ stat -r winamp_cache_0001
        234881029 7439 040777 1 502 20 0 32768 1261506600 1266081756 1266081756 1261559092 131072 64 32768 winamp_cache_0001
        MaxBookAir:GARMIN'S maxint$ ls -lTd winamp_cache_0001/
        drwxrwxrwx  1 maxint  staff  32768 Feb 13 22:52:36 2010 winamp_cache_0001/
        MaxBookAir:GARMIN'S maxint$
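    One direction worth trying (an editor's sketch, not from the original thread): FAT volumes can carry BSD file flags that Finder's Lock box does not always reflect; inspecting and clearing them with the standard macOS tools may unblock rmdir:

        # Sketch: look for flags such as uchg/schg, then clear them and retry
        ls -ldO "/Volumes/GARMIN'S/winamp_cache_0001"
        chflags -R nouchg,noschg "/Volumes/GARMIN'S/winamp_cache_0001"
        rmdir "/Volumes/GARMIN'S/winamp_cache_0001"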

    Read the article

  • C string question

    - by user208454
    I am writing a simple C program which reverses a string, taking the string from argv[1]. Here is the code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        char* flip_string(char *string) {
            int i = strlen(string);
            int j = 0;
            // Doesn't really matter, all I wanted was the same size string for temp.
            char* temp = string;

            puts("This is the original string");
            puts(string);
            puts("This is the \"temp\" string");
            puts(temp);

            for (i; i >= 0; i--) {
                temp[j] = string[i];
                if (j <= strlen(string)) {
                    j++;
                }
            }
            return temp;
        }

        int main(int argc, char *argv[]) {
            puts(flip_string(argv[1]));
            printf("This is the end of the program\n");
        }

    That's basically it. The program compiles and everything, but does not print the temp string at the end (just blank space). In the beginning it prints temp fine, when it's equal to string. Furthermore, if I do a character-by-character printf of temp in the for loop, the correct temp string is printed, i.e. string - reversed. Just when I try to print it to standard out (after the for loop / or in main) nothing happens; only blank space is printed.
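    For contrast, a sketch of a version that avoids the two traps in the code above: temp aliases string instead of being a separate copy, and the first character copied is string[strlen(string)], the terminating NUL, which lands in temp[0] and makes the result print as empty. Allocating a real second buffer is one possible fix, not the only one:

        /* Sketch: reverse into separate storage; skip the terminating NUL. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        char *flip_string(const char *string) {
            size_t len = strlen(string);
            char *temp = malloc(len + 1);      /* a real copy, not an alias */
            if (temp == NULL)
                return NULL;
            for (size_t j = 0; j < len; j++)
                temp[j] = string[len - 1 - j]; /* last character first, NUL excluded */
            temp[len] = '\0';
            return temp;
        }

        int main(int argc, char *argv[]) {
            if (argc > 1) {
                char *reversed = flip_string(argv[1]);
                puts(reversed);
                free(reversed);
            }
            printf("This is the end of the program\n");
            return 0;
        }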

    Read the article

  • Make Your Clock Creates a Custom Clock for your Android Homescreen

    - by ETC
    If you'd like to create a custom clock face for your Android homescreen, Make Your Clock makes it easy to create a clock face with customized colors, font, display style, and more. You can create a clock that looks like a digital watch face, an old fashioned flip clock, a combination of digital output and date, and other variations. You can also adjust the size of the clock to anywhere between 1×1 and 4×2. Currently the app is limited to displaying the time and date; future releases are slated to include weather and lunar phases in addition to the time. Check out the video below to see the app in action: Make Your Clock [AppBrain via Yahoo!]

    Read the article

  • Aspect Ratio on Nero 9 for burning DVD

    - by Tara
    I am currently attempting to burn a screen capture file to DVD. I will admit that I know very little about the process and the terminology, and am at a loss as to how to find this information. I am using Nero 9 and am very displeased that the manuals available to me online explain very little. My current problem is that when I burn to DVD, my beautiful screen capture ends up being cropped. Through endless amounts of googling I am under the impression that this is due to aspect ratio. However, as Windows will not tell me the resolution size for me to determine the correct aspect ratio, I do not know how to proceed. Is there a way, using Nero 9, for me to burn my screen capture to DVD? Any advice or suggestions are appreciated.

    Read the article

  • JavaScript local alias pattern

    - by Bertrand Le Roy
    Here's a little pattern that is fairly common among JavaScript developers but that is not very well known by C# developers or people doing only occasional JavaScript development. In C#, you can use a "using" directive to create aliases of namespaces or bring them into the global scope:

        namespace Fluent.IO {
            using System;
            using System.Collections;
            using SystemIO = System.IO;

    In JavaScript, the only scoping construct is the function, but it can also be used as a local aliasing device, just like the above using directive:

        (function($, dv) {
            $("#foo").doSomething();
            var a = new dv("#bar");
        })(jQuery, Sys.UI.DataView);

    This piece of code makes the jQuery object accessible using the $ alias throughout the code that lives inside the function, without polluting the global scope with another variable. The benefit is even bigger for the dv alias, which stands here for Sys.UI.DataView: think of the reduction in file size if you use that one a lot, or about how much less you'll have to type… I've taken the habit of putting almost all of my code, even page-specific code, inside one of those closures, not just because it keeps the global scope clean but mostly because of that handy aliasing capability.

    Read the article

  • Windows 7 - Ubuntu 10.10 Dual Boot Partitioning Recommendation for HP Laptop OEM

    - by Denja
    Hi Linux Community, After being temporarily impressed with the new Windows 7, and after using it intensively, I find myself struggling with the ever slow and buggy Windows OS once again. It's time to go the Ubuntu/Linux way for a better and faster tomorrow. Unfortunately, in my country most users/businesses use Windows-based systems. As a computer technician I want to learn and use both systems, and possibly introduce new users to more affordable Linux-based systems. For now I want to create dual-boot or even triple-boot layouts on my laptop machine. Here's the layout in use now:

    * (C:) Windows 7 system partition NTFS - 284.89GB (Primary, Boot, Pagefile, Dump)
    * HP_TOOLS system partition FAT32 - 99MB (Primary)
    * (D:) RECOVERY partition NTFS - 12.90GB (Primary)
    * SYSTEM partition NTFS - 199MB (Primary)

    Here's the layout I want to make:

    * (C:) Windows 7 system partition NTFS - 60GB (Primary) (sda1)
    * (D:) Windows data partition (user files) NTFS - 60GB (Extended or Primary) (sda2); I want to share this with Linux
    * Linux root Ext4 - 10GB (Primary) (sda3)
    * Linux swap - RAM size, 3GB (sda4)
    * Linux home Ext4 - 164.9GB (Extended) (sda5)

    Question 1: Is the layout that I want to make correct as far as the Primary and Extended partitions are concerned?
    Question 2: Can I definitely get rid of the SYSTEM boot loader partition of Windows?
    Question 3: If I get rid of the HP_TOOLS and RECOVERY partitions, will it be a problem?
    Question 4: Based on my layout, what is your suggestion for a triple-boot layout with OSX or Puppy Linux?

    Thank you in advance for your advice and suggestions.

    Read the article

  • Ubuntu VM Guest - Samba Service Not Accessible from VM Host via Hostname

    - by phalacee
    I have a Windows 7 workstation with an Ubuntu 10.10 VM running in VirtualBox 3.2.12 r68302. I recently updated Samba and winbind, and since the update, I am unable to access the machine via its hostname (\\mystique) from the VM host. I can access it by the "Host-only" IP (\\192.168.56.101) and the DHCP-assigned IP address (\\10.1.1.20), and I can connect to the webserver on the machine via its hostname (http://mystique/). As stated, accessing this machine via its hostname worked fine prior to the update, but has since stopped working. I have added the hostname to the smb.conf for the netbios name, to no avail. My smb.conf [global] section looks like this:

        [global]
           workgroup = NETWORK
           netbios name = Mystique
           server string = %h server (Samba, Ubuntu)
           dns proxy = no
           log file = /var/log/samba/log.%m
           max log size = 1000
           syslog = 0
           panic action = /usr/share/samba/panic-action %d
           encrypt passwords = true
           passdb backend = tdbsam
           obey pam restrictions = yes
           unix password sync = yes
           passwd program = /usr/bin/passwd %u
           passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
           pam password change = yes
           map to guest = bad user
           usershare allow guests = yes

    Read the article

  • MySQL – Beginning Temporary Tables in MySQL

    - by Pinal Dave
    MySQL supports temporary tables to store result sets temporarily for a given connection. Temporary tables are created with the keyword TEMPORARY along with the CREATE TABLE statement. Let us create the temporary table named TEMP:

        CREATE TEMPORARY TABLE TEMP (id INT);

    Now you can find out the column names using the DESC command:

        DESC TEMP;

    The above returns the following result. This table can be accessed only from the current connection; it can be used like a permanent table, and it is automatically dropped when the connection is closed. However, you cannot find temporary tables using the INFORMATION_SCHEMA.TABLES system view. It will only list the permanent tables. MySQL usually stores the data of temporary tables in memory, processed by the MEMORY storage engine. But if the data size is too large, MySQL automatically converts it to an on-disk table and uses the MyISAM engine. You can also create a permanent table with the same name as a temporary table in the same connection. However, the structure of the permanent table is visible only once the temporary table with the same name is dropped. Let us create a permanent table with the same name TEMP as below:

        CREATE TABLE TEMP (id INT, names VARCHAR(100));

    Now running the following command still gives you the structure of the temporary table TEMP created earlier:

        DESC TEMP;

    You can drop the temporary table using the DROP TEMPORARY TABLE command:

        DROP TEMPORARY TABLE TEMP;

    After you drop the temporary table, run the following command:

        DESC TEMP;

    Now you will see the structure of the permanent table named TEMP. In summary - if there is a temporary table in MySQL, it gets first priority over the permanent table with the same name in the session. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
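    Taken together, the statements above run as one session; the consolidated script below (with comments noting what each DESC is expected to show) makes the shadowing behavior easy to reproduce:

        -- Consolidated session: the temporary table shadows the permanent one until dropped.
        CREATE TEMPORARY TABLE TEMP (id INT);
        DESC TEMP;                                       -- (id): the temporary table

        CREATE TABLE TEMP (id INT, names VARCHAR(100));  -- permanent table, same name
        DESC TEMP;                                       -- still (id): temp has priority

        DROP TEMPORARY TABLE TEMP;
        DESC TEMP;                                       -- (id, names): the permanent table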

    Read the article

  • JVM memory initialization error after Windows update

    - by gianni
    We have three Windows Server 2003 machines with 2 GB RAM:

        Server1: Tomcat 5.5.25, JVM version SUN 1.6.0_11-b03
        Server2: Tomcat 5.5.25, JVM version SUN 1.6.0_14-b08
        Server3: Tomcat 6.0.18, JVM version SUN 1.6.0_14-b08

    For the three servers the JVM parameters are:

        -XX:MaxPermSize=256m
        -Dcatalina.base=C:\Programmi\Apache Group\apache-tomcat-5.5.25
        -Dcatalina.home=C:\Programmi\Apache Group\apache-tomcat-5.5.25
        -Djava.endorsed.dirs=C:\Programmi\Apache Group\apache-tomcat-5.5.25\common\endorsed
        -Djava.io.tmpdir=C:\Programmi\Apache Group\apache-tomcat-5.5.25\temp
        vfprintf
        -Xms512m
        -Xmx1024m

    For some months everything worked fine. Last Friday we installed some Windows updates. After the reboot, Tomcat doesn't start, with the error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap

    We reduced the parameter -Xmx1024m to -Xmx768m and now Tomcat starts. But we need a greater max heap size. What happened to our servers? Thanks in advance.

    Read the article

  • Where is the Mac Divx Web Player 7 cache folder?

    - by user30710
    Until recently, I was using DivX Web Player 1.4.2 because it seemed to be the least buggy. It was saving files in users/xxxxxx/movies/divx movies/temporary added files and was deleting them when the cache limit was reached. Now with 7, it's saving them alright, because I can watch my HD space go down, but I can't find them. And it's not respecting the cache limit size (mine is 4GB). The only way to clear up this space is a restart of the Mac. I'm running 10.6.8 and Chrome. I've looked everywhere for the folder manually. Where is it?

    Read the article

  • Game Changer Appliance for SMBs Powered by Oracle Linux

    - by Zeynep Koch
    In the November 28th CRN article Review: Thumbs-Up On Oracle Database Appliance, Edward F. Moltzen mentions that "The Test Center likes this appliance (Oracle Database Appliance), for the performance and for the strong security offered by the underlying Oracle Linux in the box. It's more than a solid offering for the SMB space; it's potentially a game-changer as data and security needs race to keep up with the oncoming generations of technology." The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g—in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and network that's engineered for simplicity, saving time and money by simplifying deployment, maintenance, and support of database workloads. All hardware and software components are supported by a single vendor—Oracle—and offer customers unique pay-as-you-grow software licensing to quickly scale from 2 processor cores to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades. It is:

    Simple—Complete plug-and-go hardware and software
    Reliable—Advanced management features and single-vendor support
    Affordable—Pay-as-you-grow platform for small database consolidation

    The Oracle Database Appliance is a 4U rack-mountable system pre-installed with Oracle Linux and Oracle appliance manager software. Redundancy is built into all components, and the Oracle appliance manager software reduces the risk and complexity of deploying highly available databases. It's perfect for consolidating OLTP and data warehousing databases up to 4 terabytes in size, making it ideal for midsize companies or departmental systems. Read more about Oracle's Database Appliance. Read more about Oracle Linux.

    Read the article
