Search Results

Search found 17944 results on 718 pages for 'size t'.

  • Broken Package on Update Manager

    - by Widy Graycloud
    I don't know what's wrong with my update manager. It says that the software I installed is broken, maybe because I forced my laptop to shut down: Ubuntu wouldn't shut down properly, it showed the desktop wallpaper but no title bar or launcher (that's another bug). I tried to update the broken packages, about 60 to 70 MB in total, but it doesn't work: now I cannot update or install any software from Update Manager or Ubuntu Software Center. Can anybody tell me what's wrong? In Ubuntu Software Center I chose repair, and when it tried to update the broken packages it failed with an error message. I can't update or install any program from Ubuntu Software Center or Update Manager anymore (I had closed all programs, including Ubuntu Software Center and Update Manager, in this case). Can someone help me? I tried apt-get install -f in a terminal, but it shows a message like this:

        E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
        E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
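
    The two E: lines at the end point at permissions rather than packaging: /var/lib/dpkg/lock is only writable by root. A minimal sketch of the usual recovery steps, assuming a stock Ubuntu where your account can use sudo:

        sudo apt-get install -f      # rerun the fix as root; without sudo, apt cannot take the dpkg lock
        sudo dpkg --configure -a     # finish any configuration interrupted by the forced shutdown
        sudo apt-get update          # refresh package lists before retrying Update Manager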

  • HTTP Caching Server that supports POST

    - by Jeroen
    I am hosting a REST service which sends appropriate cache-control headers. I use Varnish as a caching server in front of my webserver. However, a limitation of Varnish is that it doesn't cache HTTP POST and HTTP PUT requests. Is there any alternative caching server that can cache these requests? I understand that caching POST is a bit tricky, because you cannot just cache based on the URL as a key like for GET; it needs to actually inspect the request body. In the case of multipart/form-data requests, there should probably be a limit on the size of the request body for it to be cached (so that big file uploads, etc. won't be cached). Nevertheless, I really want to be able to cache short HTTP POST requests, or at least the application/x-www-form-urlencoded ones.

  • Subversion installation on CentOS 5.8

    - by user57221
    I am trying to install Subversion on CentOS 5.8 using yum install subversion and it is throwing the error below:

        .....
        Total size: 7.3 M
        Is this ok [y/N]: y
        Downloading Packages:
        Running rpm_check_debug
        ERROR with rpm_check_debug vs depsolve:
        libapr-1.so.0()(64bit) is needed by subversion-1.6.11-10.el5_8.x86_64
        libaprutil-1.so.0()(64bit) is needed by subversion-1.6.11-10.el5_8.x86_64
        libapr-1.so.0()(64bit) is needed by (installed) mod_perl-2.0.4-6.el5.x86_64
        apr is needed by (installed) httpd-2.2.22-12051516.x86_64
        /usr/lib64/libapr-1.so.0 is needed by (installed) httpd-2.2.22-12051516.x86_64
        libaprutil-1.so.0()(64bit) is needed by (installed) mod_perl-2.0.4-6.el5.x86_64
        apr-util is needed by (installed) httpd-2.2.22-12051516.x86_64
        /usr/lib64/libaprutil-1.so.0 is needed by (installed) httpd-2.2.22-12051516.x86_64
        Complete!
        (1, [u'Please report this error in http://bugs.centos.org/yum5bug'])

    How do I resolve this?
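
    The depsolve output suggests yum cannot find the apr libraries that the stock subversion package needs (the installed httpd-2.2.22-12051516 looks like a custom build bundling its own copies). A first step worth trying, offered as an assumption rather than a confirmed fix:

        yum clean all              # discard possibly stale repository metadata
        yum install apr apr-util   # pull in the libraries the error names
        yum install subversion     # then retry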

  • How to get feedback from the community on large chunks of code?

    - by MainMa
    Code Review.SE is great when you need feedback on a precise, short piece of code. But where do you get similar feedback about the code itself when: you have thousands of LOC; you don't have colleagues in your workplace ready or willing to review the code¹; and you don't have thousands of dollars to spend on a professional review by a third-party developer?² Places like CodePlex are a good way to get your project known³, but from what I've seen, the feedback you get on known projects is consumer feedback, i.e. it concerns bugs and feature requests, not the quality of the source code itself. What are the social ways to get the community involved in reviewing a codebase of a certain size for an open source project which doesn't have the scale of Firefox or similar products?

    ¹ Which is the case for most personal and open source projects, or projects done in companies where the practice of regular and complete code review is nonexistent.
    ² Which is, again, the case for most personal and open source projects.
    ³ Even though too many projects published on CodePlex never get known, either because nobody cares or because they are not presented very well.

  • Access to self created torrent on public tracker

    - by Nick
    Not sure if this is the right site to ask this, but here goes: let's say I'd like to share a couple of private files with a few friends. They are quite large, so I've figured the best route to distribute them is via torrent. So, on my home PC I create a torrent, start seeding, and announce to public trackers like openbittorrent and publicbt. Now, both of those are public trackers, but they don't seem to have any way of searching through what is actually being tracked. If I'm only passing the torrent file around to a few friends, what are the chances that someone else will randomly come across the torrent via the public tracker and start leeching?

  • Mounting LVM2 volume with XFS filesystem

    - by Chris
    Unfortunately I'm not able to access the data on my NAS anymore, and I can't figure out why, as I haven't changed anything. So I plugged one of the hard disks into my computer to access the data. What I did:

        kpartx -a /dev/sdc

    Now I should be able to access /dev/mapper/vg001-lv001. When trying to mount it I get:

        sudo mount -t xfs /dev/mapper/vg001-lv001 /home/user/mnt
        mount: /dev/mapper/vg001-lv001: can't read superblock

    Then I ran parted -l, which gave me:

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/vg001-lv001: 498GB
        Sector size (logical/physical): 512B/512B
        Partition Table: loop

        Number  Begin  End    Size   Filesystem  Flags
         1      0.00B  498GB  498GB  xfs

    Does anybody have a solution for how to recover the data?
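
    Before anything destructive, a read-only check is worth a try; a sketch assuming xfsprogs is installed and the volume group is named vg001 as shown above:

        sudo vgchange -ay vg001                      # make sure the logical volume is actually activated
        sudo xfs_repair -n /dev/mapper/vg001-lv001   # -n: check only, writes nothing to the disk
        dmesg | tail                                 # kernel messages often say why the superblock read failed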

  • Terrible Performance with SATA Drives on Dell PowerEdge, steps to troubleshoot?

    - by Tom
    I asked this question earlier and it went missing, so here it is again. We bought a Dell PowerEdge 2950 to use as an in-house QA server. Disk performance is beyond terrible: 1000-4000 ms response times on the drive holding our SQL Server database .mdf, and the SQL Server disk queue goes upwards of 300 at times. I'm a software guy; can anyone help me with steps to determine the issue? I don't know what RAID controller it has; how can I determine that? I'm speculating it could be a BIOS issue. Perhaps the server used to have another kind of drive in it, and when I added SATA some buffer size setting ended up wrong? Perhaps I chose the wrong options (I chose defaults) when setting up the RAID 1 arrays? I thought RAID 1 was a performance array?
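
    On the "how can I determine that" part: assuming the box runs Windows (SQL Server suggests so) and Dell OpenManage Server Administrator is installed, its command-line tool reports the controller directly; failing that, Device Manager under "Storage controllers", or the PERC banner shown during POST, will name it:

        :: requires Dell OpenManage Server Administrator (an assumption)
        omreport storage controller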

  • Sorting IPv4 Addresses

    - by Kumba
    So I've run into a quandary about sorting IPv4 addresses, and didn't know if there was a set rule in some obscure networking document. Do I do a straight sort on the raw address only (such as converting the IP address to a 32-bit number and then sorting)? Do I factor in the CIDR prefix via some mathematical formula? Or do I sort by the CIDR prefix only (as if I'm comparing network sizes and not the addresses directly)? I.e., in normal math we'd write something like -1 < 0 < 1 to denote the order of precedence. Given, say, 10.1.0.0/16, 172.16.0.0/12, 192.168.1.0/24, and 192.168.1.42, what would be the order of precedence?
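
    For the "raw address only" interpretation there is at least a mechanical answer; a shell sketch that sorts octet by octet, numerically (prefix lengths stripped, CIDR ignored):

        printf '%s\n' 192.168.1.42 192.168.1.0 172.16.0.0 10.1.0.0 \
            | sort -t . -k1,1n -k2,2n -k3,3n -k4,4n
        # prints 10.1.0.0, then 172.16.0.0, then 192.168.1.0, then 192.168.1.42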

  • ArchBeat Link-o-Rama for 2012-09-26

    - by Bob Rhubart
    Oracle Introduces Free Version of Oracle Application Development Framework
    Several community bloggers have already written about Oracle Application Development Framework (ADF) Essentials, the free version of Oracle ADF. Here's the official press release.

    ADF Essentials - Quick Technical Review | Andrejus Baranovskis
    "This post is just a quick review for ADF Essentials on Glassfish," says Oracle ACE Director Andrejus Baranovskis. "I will do a proper performance test soon to compare ADF performance on

    5 ways to think like a cloud architect | ZDNet
    "Is enterprise architecture ready for the cloud? Is the cloud ready for EA?" Joe McKendrick asks. "Cloud represents a different way of thinking. But we've been here before."

    Configuring trace file size and number in WebCenter Content 11g | Kyle Hatlestad
    A quick tip from Oracle Fusion Middleware A-Team member Kyle Hatlestad.

    Thought for the Day
    "Elegance is not a dispensable luxury but a factor that decides between success and failure." — Edsger W. Dijkstra (May 11, 1930 – August 6, 2002)
    Source: SoftwareQuotes.com

  • SEO and external sites that serve responsive images (like Re-SRC)

    - by Baumr
    Re-SRC is a tool that automatically serves responsive images for your website from their cloud servers. It delivers a new image file each time the browser window (viewport) is resized. To use it in your HTML when linking to an image, you would do the following:

        <img src="http://app.resrc.it//www.your-domain.com/img/img001.jpg"/>

    Some more background for SEO considerations: as an example, looking at their demo page's code, the src of the Arc de Triomphe photo, when the browser window is resized to tablet width, shows this particular file at its widest. It is found under the following URL:

        http://app4-uk.resrc.it/s=w560,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg

    If the viewport is increased to desktop width, then a smaller image is served in line with the design; see this URL:

        http://app4-uk.resrc.it/s=w320,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg

    If I change the viewport to be about half-way between those two, then the image's URL is:

        http://app4-uk.resrc.it/s=w240,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg

    In other words, I found that there is a separate file for every 10-pixel increment of the image width. Very cool for saving bandwidth on mobile devices and serving responsive/retina images on others, but here are two problems I see for SEO:

    1. The img on your site, part of your semantic markup, will not be hosted on your site at all, or even on a server you control. Any links to these images will pass on "link juice" to Re-SRC's site instead.
    2. You are serving a vast array of different image files to different people; some may link to one, others to another size. Then there's the question of what different search engine crawlers will see.

    Also: there seems to be no fallback option if their servers are down. Do you see any other concerns? Or, perhaps, do you not see those as concerns?

  • How to print 4 index cards on a single A4 sheet in Word 2003

    - by Anna
    I have an index card designed in Word. It's fairly complicated, with graphics, borders and background. The page layout has been set to landscape, with the size set to 4x6. How can I print four of these per A4 landscape sheet? I cannot for the life of me work it out. The printer always does a single card per A4 sheet, wasting three quarters of the page. "Pages: 1,1,1,1" results in four sheets being printed. What am I doing wrong?

  • uploading large files (mp4) to IIS 7.5 gives 500 Internal Server Error

    - by dragon112
    I made a website on which I need to be able to upload video files, and it worked for quite a while. However, at some point it just stopped working, and now IIS returns a 500 Internal Server Error when I upload a video. Images still work (possibly due to their smaller size). I use an HTML form with a server-side PHP script to upload. I have already set the user permissions for the entire inetpub to allow all actions for the IIS user. If you have any idea what it could be, PLEASE tell me; I have been trying to fix this for weeks now. Thanks in advance!
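
    Since the error text isn't shown, the following is only an assumption: both IIS and PHP cap request sizes, and a video can exceed limits an image never hits. On the PHP side the knobs are upload_max_filesize and post_max_size in php.ini; on the IIS side it is request filtering, which can be raised from an elevated prompt, e.g. to ~100 MB ("Default Web Site" is a placeholder for your site name):

        :: raise the request-size cap (default is roughly 30 MB)
        %windir%\system32\inetsrv\appcmd set config "Default Web Site" /section:system.webServer/security/requestFiltering /requestLimits.maxAllowedContentLength:104857600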

  • Dealing with Fine-Grained Cache Entries in Coherence

    - by jpurdy
    On occasion we have seen significant memory overhead when using very small cache entries. Consider the case where there is a small key (say a synthetic key stored in a long) and a small value (perhaps a number or short string). With most backing maps, each cache entry will require an instance of Map.Entry, and in the case of a LocalCache backing map (used for expiry and eviction), there is additional metadata stored (such as last access time). Given the size of this data (usually a few dozen bytes) and the granularity of Java memory allocation (often a minimum of 32 bytes per object, depending on the specific JVM implementation), it is easily possible to end up with the case where the cache entry appears to be a couple dozen bytes but ends up occupying several hundred bytes of actual heap, resulting in anywhere from a 5x to 10x increase in stated memory requirements. In most cases, this increase applies to only a few small NamedCaches, and is inconsequential -- but in some cases it might apply to one or more very large NamedCaches, in which case it may dominate memory sizing calculations.

    Ultimately, the requirement is to avoid the per-entry overhead, which can be done either at the application level by grouping multiple logical entries into single cache entries, or at the backing map level, again by combining multiple entries into a smaller number of larger heap objects.

    At the application level, it may be possible to combine objects based on parent-child or sibling relationships (basically the same requirements that would apply to using partition affinity). If there is no natural relationship, it may still be possible to combine objects, effectively using a Coherence NamedCache as a "map of maps". This forces the application to first find a collection of objects (by performing a partial hash) and then to look within that collection for the desired object. This is most naturally implemented as a collection of entry processors to avoid pulling unnecessary data back to the client (and also to encapsulate that logic within a service layer).

    At the backing map level, the NIO storage option keeps keys on heap, and so has limited benefit for this situation. The Elastic Data features of Coherence naturally combine entries into larger heap objects, with the caveat that only data -- and not indexes -- can be stored in Elastic Data.

  • ifconfig can't see USB wireless

    - by Alex
    I have a WiFi USB dongle which I previously used on a Raspberry Pi (this is what it is targeted at). I am trying to get it working on an NVIDIA Jetson TK1, but I am having some problems. When I run ifconfig I can't see the WiFi adapter, only the Ethernet and local loopback. iwconfig reports no wireless extensions on all devices. lsusb does find the device:

        Bus 002 Device 008: ID 148f:5370 Ralink Technology, Corp. RT5370 Wireless Adapter

    So I am not sure why the network tools can't see it. I have tried logging in with a GUI and opening the network settings through Unity, but cannot see any wireless devices either. Not sure if this is useful, but the output of lsmod is:

        Module                  Size  Used by
        nvhost_vi               2940  0

    How can I enable wireless networking on this computer? A command-line approach is preferred, but either is fine.

    UPDATE: I don't have the kernel module rt2800usb anywhere on my system. If I do an apt-file search for rt2800usb, it lists a number of packages of the pattern linux-image-3.13.0-*. Perhaps installing one of these will do the trick, but can anyone tell me if it's safe to do so?
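
    A sketch of the checks worth running before swapping kernels, assuming a Debian-style module layout (and note, as an aside, that the linux-image-3.13.0-* packages apt-file lists are generic Ubuntu kernels; replacing the Jetson's NVIDIA L4T kernel with one of them is unlikely to be safe):

        find /lib/modules/$(uname -r) -name 'rt2800usb*'   # is the driver even built for the running kernel?
        sudo modprobe rt2800usb && dmesg | tail            # try loading it and watch for errors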

  • Tail-recursive implementation of take-while

    - by Giorgio
    I am trying to write a tail-recursive implementation of the function take-while in Scheme (but this exercise can be done in another language as well). My first attempt was

        (define (take-while p xs)
          (if (or (null? xs) (not (p (car xs))))
              '()
              (cons (car xs) (take-while p (cdr xs)))))

    which works correctly but is not tail-recursive. My next attempt was

        (define (take-while-tr p xs)
          (let loop ((acc '()) (ys xs))
            (if (or (null? ys) (not (p (car ys))))
                (reverse acc)
                (loop (cons (car ys) acc) (cdr ys)))))

    which is tail-recursive but needs a call to reverse as a last step in order to return the result list in the proper order. I cannot come up with a solution that

    1. is tail-recursive,
    2. does not use reverse,
    3. only uses lists as data structure (using a functional data structure like Haskell's sequence, which allows appending elements, is not an option),
    4. has complexity linear in the size of the prefix, or at least does not have quadratic complexity (thanks to delnan for pointing this out).

    Is there an alternative solution satisfying all the properties above? My intuition tells me that it is impossible to accumulate the prefix of a list in a tail-recursive fashion while maintaining the original order between the elements (i.e. without the need of using reverse to adjust the result), but I am not able to prove this.

    Note: the solution using reverse satisfies conditions 1, 3, 4.

  • OCZ Agility 3 SSD - Incorrect capacity displayed

    - by Chris
    Just installed a 60GB OCZ Agility 3 SSD and put Windows and various other applications on it. All working fine. However, when I look at the drive in Windows 7, it says that I have 1.5GB free, but when I select all folders on the drive and view their properties to see the combined file size, it says the total is 28.9GB. So I'm effectively losing half of my capacity! Any ideas what this could be? PC spec: Windows 7, 60GB OCZ Agility 3 SSD. Thanks, Chris
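
    One explanation that fits the numbers, offered as an assumption: selecting all folders in Explorer skips hidden system files, and on a small SSD the hibernation and paging files alone can account for tens of GB. From an elevated prompt:

        :: list hidden+system files in the root (look for pagefile.sys and hiberfil.sys)
        dir C:\ /a:hs
        :: optionally reclaim hiberfil.sys by disabling hibernation
        powercfg -h off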

  • Only one user can connect to Ubuntu samba server

    - by StaticMethod
    I set up a Samba server on 12.04 LTS, and it works great for one user but not the others. I am trying to map a network drive from a Windows 7 laptop. I can successfully authenticate with one user, but the other two both get "Access is denied" errors. Here is my smb.conf file:

        [global]
           server string = %h server (Samba, Ubuntu)
           map to guest = Bad User
           obey pam restrictions = Yes
           pam password change = Yes
           passwd program = /usr/bin/passwd %u
           passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
           unix password sync = Yes
           syslog = 0
           log file = /var/log/samba/log.%m
           max log size = 1000
           dns proxy = No
           usershare allow guests = Yes
           panic action = /usr/share/samba/panic-action %d
           idmap config * : backend = tdb

        [printers]
           comment = All Printers
           path = /var/spool/samba
           create mask = 0700
           printable = Yes
           print ok = Yes
           browseable = No

        [print$]
           comment = Printer Drivers
           path = /var/lib/samba/printers

        [share]
           comment = Ubuntu File Server Share
           path = /srv/share
           read only = No
           create mask = 0755

    I know that the service is successfully reading from the /etc/passwd file, because if I change the Linux password for the user that works, I have to use the new password when I connect. I changed all the users so they are all members of the same groups (all three users are admins anyway). I only ever have one user connected at a time. Here are the permissions on the shared folder:

        /srv$ ls -l
        drwxrwxrwx 1 nobody nogroup 16 Feb 22 17:05 share

    Any ideas?
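
    One cause consistent with "works for exactly one user", offered as an assumption: the failing accounts exist in /etc/passwd but were never added to Samba's own password database. A quick check (the username is hypothetical):

        sudo pdbedit -L            # list the users Samba actually knows about
        sudo smbpasswd -a alice    # add a missing user to Samba's database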

  • Unix tool for splitting archives

    - by Richo
    I'm dumping an svn repository to a giant USB disk that is formatted FAT out of necessity (treat this as unchangeable). It conks out when you try to create a file larger than 4 GB. I need a tool that I can pipe data to that will create files of arbitrary size which, when catted together, will be the original file. I could write a tool to do this, but if one already exists I'd rather use it. Cheers. EDIT: A second look at the split man page suggests it might work.
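
    The EDIT is on the right track; a sketch of the split-based pipeline, with the repository path as a placeholder and 3G chosen to stay safely under FAT's 4 GB per-file cap:

        svnadmin dump /path/to/repo | split -b 3G - repo.dump.   # writes repo.dump.aa, repo.dump.ab, ...
        cat repo.dump.* > repo.dump                              # later: concatenate back into the original stream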

  • Splitting Multiple Files in Windows

    - by Justin Boucher
    We have a 21TB LUN full of images, each approx 600KB in size, in multiple subfolders on the disk. We are trying to split the 21TB LUN into 8 smaller LUNs of about 2.6TB apiece in order to process the images more effectively. My question is how we can determine which folders add up to 2.6TB on the drive. What is the best tool to mark this data so we can copy it to the new smaller LUNs with robocopy or emcopy without overfilling them? Is there a third-party tool better suited for this task? Thank you in advance for your assistance.
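
    For the "which folders add up to 2.6TB" part, a subtree-size survey is the usual first step; a sketch assuming the Sysinternals du tool is on the PATH and the images live under D:\images (a placeholder):

        :: show the size of each top-level folder so they can be binned into ~2.6TB groups
        du -l 1 D:\images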

  • Windows 8.1 - Why are there multiple recovery partitions in the system?

    - by Abhiram
    DISKPART> list partition

        Partition ###  Type              Size     Offset
        -------------  ----------------  -------  -------
        Partition 1    System            500 MB   1024 KB
        Partition 2    OEM               40 MB    501 MB
        Partition 3    Reserved          128 MB   541 MB
        Partition 4    Recovery          490 MB   669 MB
        Partition 5    Primary           920 GB   1159 MB
        Partition 6    Recovery          350 MB   921 GB
        Partition 7    Recovery          9 GB     921 GB

    Above is the list of partitions on my system, which I recently upgraded to Windows 8.1. Why are there multiple recovery partitions (4, 6, 7)? Shouldn't there be just one recovery partition? And what is the Reserved partition (#3) for?

  • Problems syncing photos and strange effects of uploaded files from other devices

    - by Daniel
    I have a Galaxy Spica (GT-i5700), Android v2.1, rooted with Leshak dev 7 #123. But never mind the root info; the problem would be the same unrooted. The photos from this phone are stored in sdcard/images; nevertheless the phone also creates an sdcard/DCIM folder but only stores some thumbnails there.

    Problem nr 1: U1 only reads the DCIM folder for automatic photo upload, so photos stored on this phone are not uploaded. If I move photos to the DCIM folder, U1 recognises them and starts uploading. Possible solution: could there be an option in the settings to set a preferred photo folder?

    Problem nr 2: Out of 74 pictures, 12 did not get uploaded. Pressing "Retry failed transfers" in Settings does nothing. Tapping the files whose status is "Upload failed, tap to retry" only changes the status to "Uploading..." but nothing gets uploaded. If I upload another file to U1, it is uploaded directly without any problem. It has nothing to do with file size; 1.1 MB files have been uploaded fine whilst some that failed are 0.8 MB.

    Problem nr 3: The photos from DCIM are in my case uploaded to a folder called "Pictures - GT-I5700" in U1. If I log in to the homepage and from there upload another photo to "Pictures - GT-I5700", it shows up in U1 on my phone fine. But when I tap it, U1 downloads the photo to "sdcard/U1/Pictures - GT-I5700". If it syncs photos from sdcard/DCIM to a specific folder, why not also download files to the same folder from which they are synced? After a while of usage, syncing and uploading files from different clients, it would be a mishmash of folders and places where files are stored, and considering that, I see no use for U1 at all.

    Another question: if my SD card in some way breaks down, some folders cannot be read, or the card is temporarily swapped while U1 is running, does U1 consider that as files deleted and also delete them from the cloud?

  • Identify "Composite Document File"

    - by Steven
    In a folder containing several PowerPoint presentations and spreadsheets, I discovered the following file:

        Name: ppt115.tmp
        Size: 160 MB
        Meta: no EXIF or other metadata
        Type: (as identified by the cygwin/Linux program 'file') Composite Document File V2 Document, No summary info

    Notes: the filename does not correspond to other files in the directory. Neither MS PowerPoint nor Excel can open the file. MS Word will only attempt to recover text.

    Please help me identify this file. Is it just a temporary file that I can safely remove?

  • Tweaks to make ClearType better at high resolutions?

    - by ULTRA_POROV
    ClearType is great when displaying small text (say 10-16px). However, when you display something above 20px it starts looking like mud. Just compare it to Photoshop: Photoshop's rendering at small sizes is not very impressive, too blurry, but if you compare at 20px, Photoshop wins every time. ClearType looks jaggy around the edges, almost as if there were no ClearType at all. Can this be fixed, or is it just the way ClearType is?

  • Cube chunk via list ToArray()

    - by Christian Frantz
    I've created a list of vertices that I populate for each cube made in my array "cubes". When each cube is created, SetUpVertices is called, a method that stores the 8 vertices of my cube. At the end of my list creation, I create a vertex buffer and set the data of the list containing the vertices of all 25 cubes to that vertex buffer, effectively creating a "chunk" of cubes. The problem is that an InvalidOperationException ("The array is not the correct size for the amount of data requested.") is thrown at the line vertices.ToArray(). I don't have an array for this, as the number of cubes will be changing and arrays aren't dynamic. What could be the cause of this?

        for (int x = 0; x < 5; x++)
        {
            for (int z = 0; z < 5; z++)
            {
                SetUpVertices();
                cubes.Add(new Cube(device, new Vector3(x, map[x, z], z), color));
            }
        }

        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionColor), 8, BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionColor>(vertices.ToArray());
