Search Results

Search found 18029 results on 722 pages for 'stripe size'.


  • ubuntu nic card issue

    - by Blainer
    I am trying to install the r8168 NIC driver and it shows everything installed OK. It is a brand-new NIC, but the lights won't come on when I plug in an Ethernet cable. The NIC that is not working is eth0. Why does lsmod show the r8168 driver being used by 0? My NIC model number is ST1000SPEX if anyone is wondering.

        lsmod
        Module                  Size  Used by
        r8168                 215669  0

        ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:0a:cd:1e:0a:4a
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:43 Base address:0x2000

        eth1      Link encap:Ethernet  HWaddr 00:19:d1:1d:f6:7a
                  inet addr:192.168.1.83  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::219:d1ff:fe1d:f67a/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:551467 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:145219 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:409744342 (409.7 MB)  TX bytes:12233173 (12.2 MB)
                  Interrupt:21 Memory:dfde0000-dfe00000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:280 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:280 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:22608 (22.6 KB)  TX bytes:22608 (22.6 KB)

    Ubuntu 11.10 x64, kernel 3.0.0-12-generic.

    Read the article

  • Visual annotator for large images

    - by pts
    I have a few hundred images of 30000 x 10000 pixels in size. Each image has lots of text (rendered as pixels) on it. I'd like to translate all text to another language. I speak both languages, and it's fine for me to translate each phrase manually. I need an image editor which can open these images quickly (faster than Inkscape, which needs about 60 seconds to open such an image), lets me zoom and rotate by 90 degrees, lets me erase (i.e. change the color of a selected rectangle to solid white), lets me add text, and lets me save the file as quickly as possible. I'd like to minimize the time I have to wait for the software to load, render and save images. Which is the best program for that on Windows? On Linux?

    Read the article

  • Broken Package on Update Manager

    - by Widy Graycloud
    I don't know what's wrong with my Update Manager. It says that the software I installed is broken, maybe because I force-shut-down my laptop (Ubuntu wouldn't shut down: it showed the desktop wallpaper but no title bar or launcher, and never powered off, which is another bug). I tried updating the broken packages; the download is 60 to 70 MB, but it doesn't work. Now I cannot update or install any software from Update Manager or Ubuntu Software Center. Can anybody tell me what's wrong? This is what appears when I use Update Manager. When I use Ubuntu Software Center, a message appears offering to repair; I chose repair, and when it tried to update the broken packages it failed and showed another message. The problem is that I can't update or install any program from Ubuntu Software Center or Update Manager anymore (I closed all programs, including Ubuntu Software Center and Update Manager, in this case). Can someone help me? I tried apt-get install -f in a terminal, but it shows a message like this:

        E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
        E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

    Read the article

  • Identify "Composite Document File"

    - by Steven
    In a folder containing several PowerPoint Presentations and Spreadsheets, I discovered the following file:

        Name: ppt115.tmp
        Size: 160 MB
        Meta: No EXIF or other metadata
        Type: (as identified by the cygwin / linux program 'file') Composite Document File V2 Document, No summary info

    Notes: The filename does not correspond to other files in the directory. Neither MS Power Point nor Excel can open the file. MS Word will only attempt to recover text. Please help me identify this file. Is it just a temporary file that I can safely remove?
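
    One way to peek inside a Composite Document (OLE2) file like this is to list its internal storage streams; the stream names usually give away which application produced it. A minimal sketch, assuming the third-party olefile Python package is installed (the file name is taken from the question):

        # Sketch: inspect an OLE2 "Composite Document File" to guess its origin.
        # Assumes the third-party 'olefile' package (pip install olefile).
        import olefile

        path = "ppt115.tmp"  # file name from the question

        if olefile.isOleFile(path):
            ole = olefile.OleFileIO(path)
            try:
                # Each entry is a list of path components, e.g. ['PowerPoint Document']
                for stream in ole.listdir():
                    print("/".join(stream))
            finally:
                ole.close()
        else:
            print("Not an OLE2 compound document")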

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-26

    - by Bob Rhubart
    Oracle Introduces Free Version of Oracle Application Development Framework
    Several community bloggers have already written about Oracle Application Development Framework (ADF) Essentials, the free version of Oracle ADF. Here's the official press release.

    ADF Essentials - Quick Technical Review | Andrejus Baranovskis
    "This post is just a quick review for ADF Essentials on Glassfish," says Oracle ACE Director Andrejus Baranovskis. "I will do a proper performance test soon to compare ADF performance on ..."

    5 ways to think like a cloud architect | ZDNet
    "Is enterprise architecture ready for the cloud? Is the cloud ready for EA?" Joe McKendrick asks. "Cloud represents a different way of thinking. But we've been here before."

    Configuring trace file size and number in WebCenter Content 11g | Kyle Hatlestad
    A quick tip from Oracle Fusion Middleware A-Team member Kyle Hatlestad.

    Thought for the Day
    "Elegance is not a dispensable luxury but a factor that decides between success and failure." — Edsger W. Dijkstra (May 11, 1930 – August 6, 2002) Source: SoftwareQuotes.com

    Read the article

  • Mounting LVM2 volume with XFS filesystem

    - by Chris
    Unfortunately I'm not able to access the data on my NAS anymore, and I can't figure out why, as I haven't changed anything. So I plugged one of the hard disks into my computer to access the data. What I did:

        kpartx -a /dev/sdc

    Now I should be able to access /dev/mapper/vg001-lv001. When trying to mount it I get:

        sudo mount -t xfs /dev/mapper/vg001-lv001 /home/user/mnt
        mount: /dev/mapper/vg001-lv001: can't read superblock

    Then I ran parted -l, which gave me (translated from German):

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/vg001-lv001: 498GB
        Sector size (logical/physical): 512B/512B
        Partition Table: loop

        Number  Begin  End    Size   Filesystem  Flags
         1      0,00B  498GB  498GB  xfs

    Does anybody have a suggestion for how to recover the data?

    Read the article

  • Only one user can connect to Ubuntu samba server

    - by StaticMethod
    I set up a Samba server on 12.04 LTS, and it works great for one user but not the others. I am trying to map a network drive from a Windows 7 laptop. I can successfully authenticate with one user, but the other two both get "Access is denied" errors. Here is my smb.conf file:

        [global]
           server string = %h server (Samba, Ubuntu)
           map to guest = Bad User
           obey pam restrictions = Yes
           pam password change = Yes
           passwd program = /usr/bin/passwd %u
           passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
           unix password sync = Yes
           syslog = 0
           log file = /var/log/samba/log.%m
           max log size = 1000
           dns proxy = No
           usershare allow guests = Yes
           panic action = /usr/share/samba/panic-action %d
           idmap config * : backend = tdb

        [printers]
           comment = All Printers
           path = /var/spool/samba
           create mask = 0700
           printable = Yes
           print ok = Yes
           browseable = No

        [print$]
           comment = Printer Drivers
           path = /var/lib/samba/printers

        [share]
           comment = Ubuntu File Server Share
           path = /srv/share
           read only = No
           create mask = 0755

    I know that the service is successfully reading from the /etc/passwd file, because if I change the Linux password for the user that works, I have to use the new password when I connect. I changed all the users so they are all members of the same groups (all three users are admins anyway). I only ever have one user connected at a time. Here are the permissions on the shared folder:

        /srv$ ls -l
        drwxrwxrwx 1 nobody nogroup 16 Feb 22 17:05 share

    Any ideas?

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think about the following use case? I have a table containing orders. These orders have a lot of related information needed by my current queries (think of the products; the buyer information; the region, country and state of the point of sale; and so on). To follow a de-normalized approach, I don't put identifiers of these related items in my main orders collection. Instead, I have to repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of their orders). Accepting that premise, I'm committing to maintaining all the data related to an order without many updates (because if I modify the buyer's name, I'll have to iterate through all orders, updating the ones made by the same buyer, and as MongoDB locks at the document level on updates, I would be blocking each order at the moment of the update). My questions:

    - Will I have to replicate all the products' related data (i.e. category, maker and optional attributes like color, size...)?
    - What if a new feature is requested and I have to run a lot of queries with the products "as the entry point of the query" (i.e. reports showing the products' sales performance grouped by region, country, or whatever)? Is it reasonable to apply the $unwind operation to my original orders collection? (What about the performance?) Or should I create another collection with these queries in mind and replicate all the products' information (and their orders) again?
    - Wouldn't it be better to store a product_id in the original orders collection, to be more tolerant of requirement changes? (What about emulating JOINs?)
    - Would the optimal approach be a mixed solution with an RDBMS such as MySQL in order to retrieve the complete data? I mean: store product, user and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds, in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query ).
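
    As a rough illustration of the product_id approach in the last two points, here is a minimal sketch (assuming pymongo; the collection and field names are hypothetical) of keeping a reference in each order and emulating the join with a second query instead of duplicating every product attribute:

        # Sketch: reference products from orders and emulate a JOIN with a second query.
        # Assumes pymongo is installed; collection and field names are hypothetical.
        from pymongo import MongoClient

        db = MongoClient()["shop"]

        # Orders only carry a reference plus the fields that never change after the sale.
        order = {"_id": 1, "product_id": 42, "quantity": 3, "unit_price": 9.99}
        db.orders.insert_one(order)

        # "Join" emulation: collect the referenced ids, then fetch them in one round trip.
        orders = list(db.orders.find({"quantity": {"$gte": 1}}))
        product_ids = {o["product_id"] for o in orders}
        products = {p["_id"]: p for p in db.products.find({"_id": {"$in": list(product_ids)}})}

        for o in orders:
            product = products.get(o["product_id"], {})
            print(o["_id"], product.get("name"), o["quantity"])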

    Read the article

  • Emails Generated From Our Linux Server are Blocked By Our Exchange Server (That Has Barracuda)

    - by Scott
    We have our company website hosted on a Linux machine. It sends mail via Postfix. The emails work and are delivered to other mail systems such as Gmail. However, we are not receiving the emails on our Exchange server. When we look at the logs, we see that the connection is being refused, presumably by the Exchange server:

        postfix/qmgr[11865]: DA6D42FF13: from=<[email protected]>, size=3166, nrcpt=1 (queue active)
        postfix/smtp[12474]: connect to mail.sanitizeddomain.com[XXX.XXX.XXX.XXX]:25: Connection refused
        postfix/smtp[12474]: DA6D42FF13: to=<[email protected]>, relay=none, delay=172915, delays=172914/0.03/0.07/0, dsn=4.4.1, status=deferred (connect to mail.sanitizeddomain.com[XXX.XXX.XXX.XXX]:25: Connection refused)

    We do run Barracuda. We cannot telnet from the Linux machine to our mail server because we get the same message.

    Read the article

  • Capture documents in bitonal, or grayscale then downsample

    - by Jason R. Coombs
    I'm about to embark on a document archival process. I'm going to spend a lot of good money to archive some paper (actually microfiche) to TIFF images. I have a choice of 300-dpi bitonal (1-bit, black/white) or 300-dpi grayscale (8-bit). Cost is the same for either format. Data volume (and thus image size) is not a factor. It seems to me that the grayscale, since it is scanned at the same resolution as the bitonal, would always contain more information and could always be downsampled to the equivalent bitonal image. Are there any downsides to selecting grayscale and then later downsampling to bitonal if desired? In other words, is it possible that the scanning software will produce a more accurate (or more legible) representation than a grayscale image converted to bitonal?
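
    For reference, downsampling a grayscale scan to bitonal after the fact is straightforward in most imaging tools. A minimal sketch using the Pillow library (file names and the threshold value are placeholders, and a simple global threshold is only a rough stand-in for the adaptive thresholding a good scanner driver would apply):

        # Sketch: convert an 8-bit grayscale TIFF to a 1-bit bitonal TIFF.
        # Assumes Pillow is installed; file names and threshold are placeholders.
        from PIL import Image

        THRESHOLD = 128  # pixels at or above this become white, darker become black

        gray = Image.open("scan_gray.tif").convert("L")          # 8-bit grayscale
        bitonal = gray.point(lambda p: 255 if p >= THRESHOLD else 0).convert("1")
        bitonal.save("scan_bitonal.tif", compression="group4")   # CCITT G4, typical for bitonal TIFF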

    Read the article

  • Text template or tool for documentation of computer configurations

    - by mjustin
    I regularly write and update technical documentation which will be used to set up a new virtual machine, or to have a lookup for system dependencies in networks with around 20-50 (server-side) computers. At the moment I use OpenOffice Writer with text tables, and create one document per intranet domain. To improve this documentation, I would like to collect some examples to identify areas where my documents can be improved, regarding general structure and content, to make it easy to read and use not only for me but also for technical staff, helpdesk etc. Are there simple text templates (for example for OpenOffice Writer) or tools (maybe database-driven) for structured documentation of a computer configuration? Such a template / tool should provide required and optional configuration sections, like 'operating system', 'installed services', 'mapped network drives', 'scheduled tasks', 'remote servers', 'logon user account', 'firewall settings', 'hard disk size' ... It is not so much low-level hardware docs but more infrastructure / integration information in these documents (no BIOS settings, MAC addresses).

    Read the article

  • How do large companies handle software updates for users without administrative rights?

    - by CT
    I just started working for a small-to-medium-size company doing IT support, with maybe 150 or fewer users. Right now every user has administrative rights to their own machine. This allows them to install updates or whatever else they would like to. I'm tired of working on users' machines that are bloated with junk they installed themselves, so my first thought was to take away administrative rights to their computers. This would also have other advantages, such as preventing a lot of drive-by malware on the web. The problem is that users would then be unable to install updates (even though I find most ignore these anyway). How do large companies handle software updates on all client machines? EDIT: Windows environment. Most servers are Windows Server 2003 Enterprise. Clients are all Windows: Win XP, Vista, and 7.

    Read the article

  • Architectural approaches to creating a game menu/shell overlay on PC/Linux?

    - by Ghopper21
    I'm working on a collection of games for a custom digital tabletop installation (similar to Microsoft Surface tables). Each game will be an individual executable that runs full-screen. In addition, there needs to be a menu/shell overlay program running simultaneously. The menu/shell will allow users to pause games, switch to other games, check their game history, etc. Some key requirements of the shell: it intercepts all user input (mainly multitouch) before passing it on to the currently running game (so that it can, for instance, know to pop up at a "pause" command); it can reveal itself over arbitrary portions of the screen, with the currently running (but presumably paused) game still showing underneath, ideally with its shape/size being dynamic, to allow for an animated in/out drawer effect over the game. I'm currently looking into different architectural approaches to this problem, including Fraps and DirectX overlays, but I'm sure I'm missing some ways to think about this. What are the main approaches I should be considering? (Note: the table is currently run by a Windows PC, but it could potentially be a Linux box instead.)

    Read the article

  • Optimal way to make MySQL backups for fairly large databases (MyISAM / InnoDB)

    - by WinkyWolly
    Currently we have one beefy MySQL database that runs a couple of high-traffic Django-based websites as well as some e-commerce websites of decent size. As a result we have a fair number of large databases using both InnoDB and MyISAM tables. Unfortunately we've recently hit a wall due to the amount of traffic, so I've set up another master server to help alleviate reads / backups. At the moment I simply use mysqldump with a few arguments, and it has proven to be fine... until now. mysqldump is a quick-and-dirty but slow method, and I believe we've outgrown its use. I now need a good alternative and have been looking into utilizing Maatkit's mk-parallel-dump utility or an LVM snapshot solution. Succinct short version:

    - I have some fairly large MySQL databases I need to back up
    - The current method using mysqldump is inefficient and slow (causing issues)
    - I'm looking into something such as mk-parallel-dump or LVM snapshots

    Any recommendations or ideas would be appreciated - since I have to redo how we're doing things, I'd rather have it done properly and as efficiently as possible :).
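
    As a rough sketch of the mk-parallel-dump idea (one dump process per database, run concurrently), something like the following would work; the database names, defaults file, and output directory are placeholders, and --single-transaction only gives lock-free consistency for the InnoDB tables (MyISAM tables would still need locking):

        # Sketch: dump several databases in parallel, one mysqldump process each.
        # Database names, defaults file, and output directory are placeholders.
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        DATABASES = ["shop_prod", "django_site1", "django_site2"]  # hypothetical names
        OUTPUT_DIR = "/backups/mysql"

        def dump(db: str) -> int:
            outfile = f"{OUTPUT_DIR}/{db}.sql"
            with open(outfile, "w") as out:
                # --single-transaction gives a consistent snapshot for InnoDB tables
                # without holding table locks for the duration of the dump.
                proc = subprocess.run(
                    ["mysqldump", "--defaults-file=/root/.my.cnf",
                     "--single-transaction", "--quick", db],
                    stdout=out,
                )
            return proc.returncode

        with ThreadPoolExecutor(max_workers=3) as pool:
            results = list(pool.map(dump, DATABASES))

        print("exit codes:", results)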

    Read the article

  • Windows DFS Limitations

    - by Phil
    So far I have seen an article on performance and scalability mainly focusing on how long it takes to add new links. But is there any information about limitations regarding number of files, number of folders, total size, etc? Right now I have a single file server with millions of JPGs (approx 45 TB worth) that are shared on the network through several standard file shares. I plan to create a DFS namespace and replicate all these images to another server for high availability purposes. Will I encounter extra problems with DFS that I'm otherwise not experiencing with plain-jane file shares? Is there a more recommended way to replicate these millions of files and make them available on the network? EDIT: I would experiment on my own and write a blog post about it, but I don't have the hardware for the second server yet. I'd like to collect information before buying 45 TB of hard drive space...

    Read the article

  • Writing to external drive runs out of space prematurely

    - by steve
    I have a USB 2.0, 500 GB HDD. I am writing a bunch of data to it that I previously recovered from the drive. I have formatted the drive as exFAT, since the drive will be used with Windows and OS X. At first, I tried using Windows Explorer to move the files over to the drive (about 160 GiB worth), but after copying about 30% of the data (according to TeraCopy), Windows Explorer reported the drive as out of space and completely full. WinDirStat only showed the size of the data that had been copied over... Where did this extra space go? Why is there a 300+ GiB discrepancy between the usage reported by the files and what Explorer sees?
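
    One hypothesis worth checking is allocation-unit overhead: exFAT volumes formatted with a large cluster size waste most of each cluster when the data consists of many small files, which recovered data often does. A small back-of-the-envelope sketch, with the cluster size and file-size distribution below being pure assumptions rather than measurements from this drive:

        # Sketch: estimate on-disk usage when every file occupies whole clusters.
        # The cluster size and the file-size distribution are assumptions,
        # not measurements from the drive in question.
        import math

        CLUSTER = 128 * 1024  # bytes; large exFAT volumes are often formatted with big clusters

        # Hypothetical mix of recovered files: (count, average size in bytes)
        files = [
            (2_000_000, 4 * 1024),      # lots of tiny fragments
            (50_000, 2 * 1024 * 1024),  # medium files
            (1_000, 50 * 1024 * 1024),  # large files
        ]

        logical = sum(count * size for count, size in files)
        on_disk = sum(count * math.ceil(size / CLUSTER) * CLUSTER for count, size in files)

        print(f"logical size : {logical / 2**30:.1f} GiB")
        print(f"on-disk size : {on_disk / 2**30:.1f} GiB")
        print(f"overhead     : {(on_disk - logical) / 2**30:.1f} GiB")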

    Read the article

  • Partition Alignment Confusion

    - by user170757
    I have a new Samsung 840 250GB SSD on the way, and I want to make sure that everything is running optimally after install. I've spent many frustrating hours on the internet trying to understand how I should align the partitions of the SSD when it arrives (and even how to partition everything; my other drive is a 1TB HDD with files already on it). I'd like to know a foolproof way of setting everything up. Now, the only place I could find the erase block size of the 840 is here: http://thessdreview.com/Forums/ssd-beginners-guide-discussion/3630.htm I simply can't understand why such information isn't made freely accessible by manufacturers! Anyway, this would suggest the erase block size is 1536 kB, which seems odd to me. It is to my understanding that you should now align by MiB (usually at 1 MiB boundaries). I assume that the figure above should actually be 1536 KiB = 1.5 MiB? This seems to suggest the partition alignment will be somewhat non-standard. So my question is: how do I align my partitions given this information? Please bear in mind that I have never used Linux before; I'm doing my best to get everything set up so that I can begin to learn, but I am finding this process incredibly opaque and time-consuming. If possible, a step-by-step guide through GParted would be great; at the moment I'm considering an NTFS partition of ~20GB for Windows (playing games), an EXT4 partition of ~20GB for Ubuntu (for doing everything else), and a shared documents-and-games partition for everything else in NTFS format. I'm not going to have any swap partition; I'll use swap files instead.
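
    For what it's worth, the arithmetic behind "non-standard" alignment is easy to check: a partition start is aligned to a 1.5 MiB erase block only if its byte offset is a multiple of 1.5 MiB, and the smallest boundary that satisfies both the usual 1 MiB convention and a 1.5 MiB erase block is their least common multiple, 3 MiB. A small sketch (the sector size and the 1.5 MiB figure come from the question; the example start sectors are made up):

        # Sketch: check whether a partition's start sector is aligned to the erase block.
        # Sector size and erase block size are from the question; start sectors are examples.
        from math import lcm

        SECTOR = 512                    # bytes per logical sector
        ERASE_BLOCK = 1536 * 1024       # 1.5 MiB erase block (from the forum figure)
        MIB = 1024 * 1024

        def aligned(start_sector: int, block: int) -> bool:
            return (start_sector * SECTOR) % block == 0

        for start in (2048, 3072, 6144):       # 1 MiB, 1.5 MiB, 3 MiB offsets
            offset_mib = start * SECTOR / MIB
            print(f"start sector {start} ({offset_mib:.1f} MiB): "
                  f"1 MiB aligned={aligned(start, MIB)}, "
                  f"erase-block aligned={aligned(start, ERASE_BLOCK)}")

        # The smallest boundary aligned to both 1 MiB and 1.5 MiB:
        print("common boundary:", lcm(MIB, ERASE_BLOCK) // MIB, "MiB")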

    Read the article

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that can not tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period:

    - Reduce the size of each partition (by increasing the partition count)
    - Increase the number of JVMs across the cluster (increasing the total number of primary service threads)
    - Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed)
    - Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves)
    - Make sure that the backing map is as fast as possible (in most cases this means running on-heap)
    - Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded)

    As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).
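
    A back-of-the-envelope sketch of the first point may help: for a fixed data set, raising the partition count shrinks the unit of work each transfer has to move, so the worst-case wait behind any single partition drops. All figures below are made-up examples for illustration, not Coherence measurements:

        # Sketch: illustrative arithmetic for partition size vs. partition count.
        # All figures below are made-up examples, not Coherence measurements.
        DATA_PER_MEMBER_MB = 2048        # primary data held by the failed member
        TRANSFER_MB_PER_SEC = 100        # effective rate at which a partition can be re-homed

        for partition_count in (257, 1021, 4093):        # partition counts are typically prime
            partitions_per_member = partition_count // 16   # assume a 16-member cluster
            mb_per_partition = DATA_PER_MEMBER_MB / partitions_per_member
            worst_single_wait = mb_per_partition / TRANSFER_MB_PER_SEC
            print(f"partition count {partition_count:5d}: "
                  f"~{mb_per_partition:6.1f} MB per partition, "
                  f"~{worst_single_wait * 1000:6.0f} ms to move one partition")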

    Read the article

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    We've decided that we want to experiment and limit requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time that you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (being equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s allowing him to regain access to the site. Assuming that I have the above correct, my question is what happens when I start regulating access per minute? If we chose 60 requests per minute, at what rate does the token bucket replenish?
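
    A small simulation of the mental model in the question may help frame it. This models a generic token bucket with continuous refill; it is not derived from the nginx source, and the common understanding that nginx tracks the rate at millisecond granularity (so 60r/m and 1r/s both replenish at roughly one request per second) is an assumption here:

        # Sketch: generic token-bucket model of a per-minute limit with a burst allowance.
        # This is a mental-model simulation, not code derived from nginx itself.
        RATE_PER_MIN = 60          # configured limit: 60 requests per minute
        BURST = 5                  # bucket capacity
        refill_per_sec = RATE_PER_MIN / 60.0   # 1 token per second under this assumption

        def simulate(request_times):
            """request_times: sorted arrival times in seconds; returns list of (t, allowed)."""
            tokens = BURST
            last = 0.0
            results = []
            for t in request_times:
                tokens = min(BURST, tokens + (t - last) * refill_per_sec)  # continuous refill
                last = t
                if tokens >= 1:
                    tokens -= 1
                    results.append((t, True))
                else:
                    results.append((t, False))   # would get the 503 / limit_req error
            return results

        # Ten requests in one second, then one more after a pause:
        for t, ok in simulate([0.1 * i for i in range(10)] + [4.0]):
            print(f"t={t:4.1f}s allowed={ok}")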

    Read the article

  • Is it possible to get xRandR to see two separate outputs with the nvidia driver?

    - by rumtscho
    I have two monitors, which I have set up with nvidia-settings in TwinView. The result: when I want to do something in xRandR, it does not work. It doesn't report one output per video card head, but a single output mapped to the combined area of both monitors:

        rumtscho@bradbury:~$ xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 3840 x 1440, current 3840 x 1440, maximum 3840 x 1440
        default connected 3840x1440+0+0 0mm x 0mm
           3840x1440       50.0*

    Now I promised somebody to help test a driver. The developer is using an open source driver for Intel video cards, and his driver assumes that there is more than one xRandR output, each mapped to a monitor. So I tried rewriting my xorg.conf to somehow get two outputs to show up, but failed. Googling showed that people faced with the xRandR/nvidia problem either stopped using xRandR and achieved what they needed with nvidia-settings, or changed their driver to nouveau. The first is not going to help in my situation, and I am not willing to give up the proprietary driver, because Compiz won't work without it. So does anybody know a way to get the nvidia driver to actually pass information about its outputs on to xRandR?

    Read the article

  • SEO and external sites that serve responsive images (like Re-SRC)

    - by Baumr
    Re-SRC is a tool that allows you to automatically serve responsive images for your website from their cloud servers. It delivers a new image file each time the browser window (viewport) is resized. To use it in your HTML when linking to an image, you would do the following: <img src="http://app.resrc.it//www.your-domain.com/img/img001.jpg"/> Some more background for SEO considerations: as an example, looking at their demo page's code, the src of the Arc de Triomphe photo, when the browser window is resized to tablet width, shows this particular file at its widest. It is found under the following URL: http://app4-uk.resrc.it/s=w560,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg If the viewport is increased to desktop width, then a smaller image is served in line with the design; see this URL: http://app4-uk.resrc.it/s=w320,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg If I change the viewport to be about half-way between those two, then the image's URL is: http://app4-uk.resrc.it/s=w240,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg In other words, I found that there is a separate file for every 10-pixel increment of the image width. Very cool for saving bandwidth on mobile devices and serving responsive/retina images on others, but... Here are two problems I see for SEO:

    - The img on your site, part of your semantic markup, will not be hosted on your site at all, or even on a server you control. Any links to these images will pass on "link juice" to Re-SRC's site instead.
    - You are serving a vast array of different image files to different people; some may link to one, others to another size. Then there's the question of what different search engine crawlers will see.

    Also: there seems to be no fallback option if their servers are down. Do you see any other concerns? Or, perhaps, do you not see those as concerns?
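
    For illustration, the URL pattern observed above can be reproduced in a few lines; the rounding-to-10-pixels behaviour is inferred from the demo URLs in the question, not taken from Re-SRC documentation:

        # Sketch: build a Re-SRC style URL from a viewport width, following the pattern
        # observed in the demo URLs above (the 10-pixel rounding is an inference).
        def resrc_url(origin_image_url: str, viewport_width: int) -> str:
            width = (viewport_width // 10) * 10   # observed: one file per 10-pixel step
            return f"http://app4-uk.resrc.it/s=w{width},pd1/ro=h//{origin_image_url}"

        print(resrc_url("www.resrc.it/img/demo/demo-image-1.jpg", 567))
        # -> http://app4-uk.resrc.it/s=w560,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg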

    Read the article

  • SearchServer2008Express Search Webservice

    - by Mike Koerner
    I was working on calling the Search Server 2008 Express search webservice from PowerShell. I kept getting:

        <ResponsePacket xmlns="urn:Microsoft.Search.Response"><Response domain=""><Status>ERROR_NO_RESPONSE</Status><DebugErrorMessage>The search request was unable to connect to the Search Service.</DebugErrorMessage></Response></ResponsePacket>

    I checked the user authorization, the webservice search status, even the WSDL. Turns out the URL for the SearchServer2008 search webservice was incorrect. I was calling

        $URI= "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

    and it should have been

        $URI= "http://ss2008/_vti_bin/search.asmx?WSDL"

    Here is my sample PowerShell script:

        # WSS Documentation http://msdn.microsoft.com/en-us/library/bb862916.aspx
        $error.clear()

        #Bad SearchServer2008Express Search URL
        $URI= "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

        #Good SearchServer2008Express Search URL
        $URI= "http://ss2008/_vti_bin/search.asmx?WSDL"

        $search = New-WebServiceProxy -uri $URI -namespace WSS -class Search -UseDefaultCredential

        $queryXml = "<QueryPacket Revision='1000'>
          <Query >
            <SupportedFormats>
              <Format revision='1'>urn:Microsoft.Search.Response.Document.Document</Format>
            </SupportedFormats>
            <Context>
              <QueryText language='en-US' type='MSSQLFT'>SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Microsoft')</QueryText>
              <!--<QueryText language='en-US' type='TEXT'>Microsoft</QueryText> -->
            </Context>
          </Query>
        </QueryPacket>"

        $statusResponse = $search.Status()
        write-host '$statusResponse:'  $statusResponse

        $GetPortalSearchInfo = $search.GetPortalSearchInfo()
        write-host '$GetPortalSearchInfo:'  $GetPortalSearchInfo

        $queryResult = $search.Query($queryXml)
        write-host '$queryResult:'  $queryResult

    Read the article

  • Install a i386 printer driver into an amd64 distribution or how can I find a good printer based on features?

    - by Yanick Rochon
    Hi, I just bought a Lexmark Interpret S408 all-in-one printer. The box said that it supported Ubuntu 8.04, but I told myself it should work with Lucid... well, no. The only driver I have found is for i386, while I have an amd64 image installed; the architectures are incompatible. So, the question is: is it possible to install that driver anyway, somehow? Or do I need to take that printer back to the store and buy another one? If the latter is the only alternative, I need a printer that has wireless connection capability, can do color printing, and is reasonably priced (less than $200 CAD). Thank you for your answers, help, and tips. ** UPDATE ** The driver was provided as a deb package (for Debian distributions) and I managed to extract the actual deb package driver out of the install program. I ran sudo dpkg -i --force-all lexmark-inkjet-09-driver-1.5-1.i386.deb and the driver installed, and I was able to print something out. But it pretty much ends there; I cannot access any more of the printer settings, etc. (e.g. scanner, fax, wifi settings, etc.). It should suffice for now as I'm satisfied with the printer's features (and size, and price), but if I could have a fully Linux-supported printer like that one, I would return this one in exchange for the other.

    Read the article

  • Is it possible to clone system drive in Windows 7?

    - by Ladislav Mrnka
    My current problem is that my Window 7 system drive is unstable. I would like to try to clone this drive to the same type of disk (OCZ Vertex 2 120GB to OCZ Vertex 2 120GB) and replace the system drive with created clone. My installation doesn't have ProgramData and User profiles on the system drive. Later on (after warranty replacement of problematic drive), I would like to copy ProgramData and User profiles to different disk (Samsung SpinPoint 750GB to OCZ Vertex 2 120GB) and use the new disk instead. Note: data have only few GBs so there should not be any problem with the disk size. Is it possible? What is the best way to do that? Is it better to simply reinstall the system from scratch (I would like to avoid it)?

    Read the article

  • Automating repetitive game development tasks

    - by MrDatabase
    Disclaimer: this is an open-ended and kinda "far out" question. Over the last few years I've made a few iPhone games. I use very common programs like Xcode and Illustrator to make the games. Lately I've become tired of repeating certain tasks over and over again. Here are some examples:

    - in Xcode: "clean target, build, run" over and over again
    - in Xcode: delete image resources and then import updated image resources (identical names)

    I'd like to automate these tasks in Xcode. Any ideas? I've done some automation in Photoshop using the "button mode" thing where you record a macro... that's been very useful. Here's the kinda wacky or "far out" part of the question: how can this automation be done via voice commands? (perhaps using a Nuance product or something) Here's an example of what I'd love to do via a few voice commands:

    - Save artwork from Illustrator at a user-specified size (@2x versions as well)
    - Delete "someArt.png" and "someArt@2x.png" from Xcode
    - Add the updated versions of someArt.png to Xcode
    - In Xcode: clean target, build, and run

    I know this question probably seems bizarre... but something like this could make certain things substantially easier for game developers. Edit: wonder if a combination of AppleScript and Nuance might work?
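
    For the scripted half (ignoring voice control), the repetitive steps can be driven from a small script. A minimal sketch assuming the xcodebuild command-line tool is available; the project name, target name, asset name, and paths are purely hypothetical placeholders:

        # Sketch: automate "swap updated art, clean, build" for an Xcode project.
        # Project name, target name, and paths are hypothetical placeholders.
        import shutil
        import subprocess
        from pathlib import Path

        PROJECT = "MyGame.xcodeproj"
        TARGET = "MyGame"
        EXPORTED_ART = Path("~/Desktop/exported_art").expanduser()   # where Illustrator exports land
        RESOURCES = Path("~/Projects/MyGame/Resources").expanduser()

        def replace_art(name: str) -> None:
            """Overwrite name.png and name@2x.png in the project's resource folder."""
            for suffix in ("", "@2x"):
                src = EXPORTED_ART / f"{name}{suffix}.png"
                if src.exists():
                    shutil.copy2(src, RESOURCES / src.name)

        def clean_and_build() -> None:
            subprocess.run(["xcodebuild", "-project", PROJECT, "-target", TARGET,
                            "clean", "build"], check=True)

        replace_art("someArt")
        clean_and_build()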

    Read the article
