Search Results

Search found 17716 results on 709 pages for 'bad pool header'.


  • Is it a bad practice to have an interface to define constants?

    - by FabianB
    I am writing a set of JUnit test classes in Java. There are several constants, for example strings, that I will need in different test classes. I am thinking about an interface that defines them, which every test class would implement. The benefits I see there are: easy access to constants ("MY_CONSTANT" instead of "ThatClass.MY_CONSTANT"), and each constant defined only once. Is this approach good or bad practice? I feel like it abuses the concept of interfaces a little. You can answer generally about interfaces/constants, but also about unit tests if there is something special about that.

    Read the article

  • What to do if I hate C++ header files?

    - by BlaXpirit
    I was always confused about header files. They are so strange: you include the .h file, which doesn't include the .cpp, yet the .cpp files are somehow compiled too. NOTE: I UNDERSTAND EVERYTHING ABOUT THE HEADERS, PLEASE DON'T TELL ME I'M STUPID OR SHOULD USE ANOTHER LANGUAGE Recently I joined a team project, and of course, both .h and .cpp are used. I understand that this is very important, but I can't live with copy-pasting every function declaration in each of the multiple classes we have. How do I handle the 2-file convention efficiently? Are there any tools to help with that, or to automatically split one file that looks like the example below into .h and .cpp? (specifically for MS VC++ 2010)

        class A {
            ...
            Type f(Type a, Type b) {
                //implementation here, not in another file!
            }
            ...
        };
        Type f(Type a) {
            //implementation here
        }
        ...
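    For reference, a minimal sketch of how a single-file class like the one above is conventionally split across the two files (Type stands in for whatever type the question assumes):

        // a.h - declarations only; this is what other .cpp files #include
        #pragma once
        class A {
        public:
            Type f(Type a, Type b);   // declaration only; the body moves to a.cpp
        };
        Type f(Type a);               // free-function declaration

        // a.cpp - definitions, compiled once and linked in
        #include "a.h"
        Type A::f(Type a, Type b) {
            return a;                 // implementation here; placeholder body
        }
        Type f(Type a) {
            return a;                 // implementation here; placeholder body
        }

    Trivial functions can also be defined inside the class in the header (they are implicitly inline), which avoids the duplication for one-liners.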

    Read the article

  • Is the use of explicit ' == true' comparison always bad? [closed]

    - by Slomojo
    Possible Duplicate: Make a big deal out of == true? I've been looking at a lot of code samples recently, and I keep noticing the use of... if( expression == true ) // do something... and... x = ( expression == true ) ? x : y; I've tended to always use... x = ( expression ) ? x : y; and... if( expression ) // do something... Where == true is implicit (and obvious?) Is this just a habit of mine, and I'm being picky about the explicit use of == true, or is it simply bad practice?

    Read the article

  • Are generic keywords in url bad for SEO? [closed]

    - by user1661479
    Possible Duplicate: Squeezing all the SEO out of a URL as possible Need help with URL structure. Let's say I'm a manufacturer of Wire EDM machines. Is it bad for me to put the keywords wire-edm in my URL to help raise my SEO ranking? For example: mywebsite.com/wire-edm/machine/model-xxxx mywebsite.com/wire-edm/customer-service mywebsite.com/wire-edm/contact Or should I leave it as the following, because the gains are fairly insignificant and it doesn't help users understand my site structure: mywebsite.com/machine/model-xxxx mywebsite.com/customer-service mywebsite.com/contact I'd like to hear what everyone's thoughts are on this, and please provide some sources for which method is better.

    Read the article

  • Is there an eCommerce platform made to fit between a global header/footer? [closed]

    - by beta208
    Possible Duplicate: Which Ecommerce Script Should I Use? We've been looking around for a while for an eCommerce platform made to live between the header and footer, for integration into an existing site. We would prefer it not to be PayPal buttons, but an actual CMS-type platform. Any suggestions? This is not a duplicate, and is a valid question sought out by many across the web. If someone has an answer, many people would benefit from it. This is not simply looking for a CMS with X or Y.

    Read the article

  • How to make a section header with a non-rectangular shape without ugly underflow?

    - by mystify
    I made a custom UITableView, then a custom header for its sections. The header has round corners. Unfortunately, the rows of the section are visible through those round corners when the header floats over them. I could just give the header a background color so the corners are not transparent, but that is not a solution, since my whole table has a background image and the section header can move. Is there any way to move the clipping region for the rows down a little? I mean: they should not appear under that section header.

    Read the article

  • Images from a remote source - is it possible, or is it really a bad practice?

    - by user1620696
    I'm building a management system for websites, and I had an idea related to image galleries that I'm not sure is a good approach. Since images might need a good deal of space depending on how many images a user uploads and so on, I thought of using cloud services like Dropbox, Mega and Google Drive to store images and load them when needed. The obvious problem is that this seems like a useless solution, because it would be slow to download the images from the remote source, making the user experience not so good. Is there any way to store the images of an image gallery with a remote source without the user experience suffering because of speed? Or is this really not a good practice?

    Read the article

  • "Never do in code what you can get the SQL server to do well for you" - Is this a recipe for a bad design?

    - by PhonicUK
    It's an idea I've heard repeated in a handful of places, some more or less acknowledging that once trying to solve a problem purely in SQL exceeds a certain level of complexity, you should indeed be handling it in code. The logic behind the idea is that for the large majority of cases, the database engine will do a better job at finding the most efficient way of completing your task than you could in code, especially when it comes to things like making the results conditional on operations performed on the data. Arguably, with modern engines effectively JIT'ing and caching the compiled version of your query, it'd make sense on the surface. The question is whether or not leveraging your database engine in this way is inherently bad design practice (and why). The lines become blurred further when all the logic exists inside the database and you're just hitting it via an ORM.

    Read the article

  • Two HTML elements with same id attribute: How bad is it really?

    - by danludwig
    Just browsing the Google Maps source code. In their header, they have 2 divs with id="search": one contains the other, and the outer one also has a jstrack="1" attribute. There is a form separating them, like so: <div id="search" jstrack="1"> <form action="/maps" id="...rest isn't important"> ... <div id="search">... Since this is Google, I'm assuming it's not a mistake. So how bad can it really be to violate this rule? As long as you are careful in your CSS and DOM selection, why not reuse IDs like classes? Does anyone do this on purpose, and if so, why?

    Read the article

  • Is it a bad practice to quit a company only to begin as a consultant? [closed]

    - by niwi
    Like the title says: is it bad practice to quit a company after a few years, only to begin as a consultant for the same 'customer'? I've been lucky enough to come into a company in the oil business at the beginning, developing software for relatively new and unused technology. Long before I even got this job, I wanted to start my own consulting firm. Is it morally wrong of me to quit my job after a few years, only to hire myself out as a 'specialist' or consultant on our systems?

    Read the article

  • Domain name similar to another existing one - bad for SEO?

    - by qqfr2507
    I am in the process of choosing a domain name for a personal project. I have found a very good one (let's say it is "myproject.com"), but it is very close to another existing domain name ("smyproject.com"); only the first letter is different. That website's activity is very different from mine. My question is: is it bad for SEO? When someone types "myproject" into a search engine, is there a risk that the first result will be "smyproject.com" if that website has better SEO than mine? Thanks for your help!

    Read the article

  • Should I use "return;" after a header()?

    - by Scarface
    Quick question: I noticed that on some of my header redirects I was getting some lag while the header processed. Is using return standard practice after sending headers? Also, if you use a header on pages you don't want directly accessed, such as processing pages, will return; stop that processing even when the page is accessed directly? If return is a good idea, would it be better to use exit()?

    Read the article

  • Bad pool header 0x00000019 in Windows 7 Home Premium when connecting to the net, followed by BSOD

    - by shankar
    Hi, I am having random blue screen errors with an error code of bad pool header 0x00000019 whenever I try going online. I use a USB datacard/modem, but when I try logging in using a regular DSL/broadband connection, I have the same issue. I searched for the error in the Windows knowledge base, which says it is an issue with Windows 7 and provides a hotfix that they do not guarantee. My vendor says something is wrong with my RAM and has ordered a new set, but in my opinion, if it were a RAM-related issue, the crashes should have occurred even while playing games, which are supposed to be RAM-intensive. If you need the minidumps I can provide them. Kindly revert back.

    Read the article

  • Is it a bad idea to run an ASP.NET app pool with the same identity as IIS's anon user?

    - by Andrew Bullock
    Subject says it all really. Thinking in security terms, I want to give each site on my server its own user account, so that they can't access each other's data. I also want to use integrated authentication for SQL so I don't have any passwords knocking about in connection strings. Is it a bad idea to use the same account for the app pool identity and the anonymous user account for IIS (I'm interested in answers for both v6 and v7)? Edit: I've seen this post describing how IIS7 allows you to automatically use the same account, but the question of whether it's a good idea or not remains ;) If so, why? Thanks

    Read the article

  • IIS 7 503 error, application pool stops, defdoc.dll could not be loaded due to a configuration problem

    - by optician
    Hi all, currently trying to get IIS 7 to work, but every time I request a page, the application pool goes into stopped status. In the event log this is what comes back: The Module DLL 'C:\Windows\System32\inetsrv\defdoc.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a x86 processor architecture. The data field contains the error number. I've already reinstalled IIS; any other ideas? I read that someone fixed this by downloading the DLL again, but this seems like an odd solution. Thanks. EDIT: I have now replaced the file with one I downloaded off the internet, and now it says: The Module DLL 'C:\Windows\System32\inetsrv\protsup.dll' could not be loaded due to a configuration problem. I hope I don't have to do hundreds of these.
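    In case it helps: that message usually means a native module built for one architecture is being loaded into a worker process of the other bitness, and defdoc.dll under System32 is the 64-bit build, so the pool appears to be running in 32-bit mode. A hedged first check (the pool name below is a placeholder):

        %windir%\system32\inetsrv\appcmd.exe list apppool "DefaultAppPool" /text:enable32BitAppOnWin64
        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:false

    If the application genuinely needs a 32-bit pool, the module entries in applicationHost.config need matching bitness preconditions instead; replacing the DLLs one by one won't fix a bitness mismatch.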

    Read the article

  • Do all web caches understand the "Cache-Control" HTTP header?

    - by chris_l
    I'd like to avoid the "Expires" header and use "Cache-Control" only - or maybe the other way around. The headers will account for a significant percentage of my traffic, so I'd prefer not to use both. AFAIK, the "Cache-Control" header was standardized in HTTP 1.1, but are there still web caches/proxies in use which don't understand it? Note: This could help answer part of my Stack Overflow (bounty) question.
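    For illustration, the belt-and-braces variant sends both headers; HTTP/1.1 caches must prefer max-age over Expires when the two disagree, so legacy HTTP/1.0 caches are the only consumers of the second line (values below are placeholders):

        Cache-Control: max-age=3600
        Expires: Thu, 01 Dec 2011 16:00:00 GMT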

    Read the article

  • How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?

    - by Rob
    I've backed up some audio files in 2 places and added ID3 tags to one backup but not the other. Since time has passed, my memory has faded on whether the backups are actually the same, but now that one has ID3 data and the other doesn't, a basic binary compare will fail and manual inspection would be cumbersome. Is there a tool to compare just the audio data (not the header/ID3 tag) in mp3s, flac files, and other formats that carry header data such as ID3? I started a thread on Beyond Compare here: http://www.scootersoftware.com/vbulletin/showthread.php?t=7413 and would consider other comparison software that does this task.
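    One hedged approach, assuming ffmpeg is available: decode each file and hash only the audio stream, so ID3 tags and other container headers drop out of the comparison. Identical hashes mean the decoded audio is bit-identical; note that an mp3 and a flac of the same recording will still hash differently, because the mp3 decode is lossy:

        ffmpeg -loglevel error -i backup1/song.mp3 -map 0:a -f md5 -
        ffmpeg -loglevel error -i backup2/song.mp3 -map 0:a -f md5 -
        # two matching "MD5=..." lines mean the audio data is the same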

    Read the article

  • Do I still need to send the "Expires" header, or can I assume that web caches understand "Cache-Control"?

    - by chris_l
    I want to reduce the overhead caused by HTTP headers to a minimum, so I'd like to avoid the "Expires" header and use "Cache-Control" only - or maybe the other way around (I'm planning to send very short HTTP responses to browsers, so the answer to this question doesn't fully apply here: my headers account for a significant percentage). AFAIK, the "Cache-Control" header was standardized in HTTP 1.1, but are there still web caches/proxies that don't understand it? Note: This is a sub-question to my Stack Overflow (bounty) question.

    Read the article

  • Why aren't my old DLLs running with my app pool in 32-bit mode?

    - by brokkalen
    I am moving my websites from a Server 2003 x86 environment to Server 2008 x64. The 2008 server is using IIS 7.5, and the app pool I am using is configured for 32-bit mode. I get an error: 'Server object error 'ASP 0177 : 800401f3' Server.CreateObject failed.' I believe the problem is in the DLLs that all the ASP sites point to. My programmers, as usual, say it isn't the code or the DLLs. Am I missing something to make these old DLLs work? By the way, these sites connect to a SQL 2000 database.
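    A hedged guess, since ASP 0177 with 0x800401f3 usually means the COM class string isn't registered on the new box: on x64 Windows, 32-bit COM DLLs must be registered with the 32-bit copy of regsvr32, the one in SysWOW64 (the DLL path and pool name below are illustrative):

        C:\Windows\SysWOW64\regsvr32.exe C:\inetpub\components\MyLegacy.dll
        %windir%\system32\inetsrv\appcmd.exe list apppool "MyPool" /text:enable32BitAppOnWin64

    The second command just confirms the pool really is in 32-bit mode, since a 64-bit worker process cannot load 32-bit COM servers in-process.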

    Read the article

  • Ways to set up a ZFS pool on a device with no way to create/manage partitions?

    - by Karl Richter
    I have a NAS on which I have no way to create and manage partitions (maybe I could with some hacks that I don't want to make). What ways exist to set up multiple ZFS pools with one partition each (for starters - I just want to use deduplication)? The setup should work with the NAS, i.e. over the network (I'd mount the images via NFS or CIFS). My ideas and associated issues so far: sparse files mounted over a loop device (specifying a sparse file directly as a ZFS vdev doesn't work, see Can I choose a sparse file as vdev for a zfs pool?): the problem is that the name/number of the assigned loop device is anything but constant, and I'm not sure how increasing the number of loop devices via a kernel parameter affects performance (there has to be a reason it's limited to 8 by default, right?). A sketch of this route is below.
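    As a starting point, a minimal sketch of the loop-device route, assuming the NAS share is mounted at /mnt/nas: letting losetup pick a free device and print it (-f --show) avoids hard-coding the loop number, and after a reboot zpool import can rescan whatever device the file is reattached to:

        truncate -s 64G /mnt/nas/zpool0.img                 # sparse backing file on the NAS mount
        LOOPDEV=$(losetup -f --show /mnt/nas/zpool0.img)    # attach; prints e.g. /dev/loop3
        zpool create -O dedup=on naspool "$LOOPDEV"
        # after a reboot: reattach the backing file, then let ZFS rediscover the pool
        losetup -f --show /mnt/nas/zpool0.img
        zpool import -d /dev naspool

    Whether ZFS on a loop device over NFS/CIFS is robust enough for real data is a separate question - treat this as an experiment, not a recommendation.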

    Read the article

  • How can one domain route to an always-changing pool of servers?

    - by ryeguy
    I'm sure this has an easy solution; I'm just not too familiar with how DNS works or whether that's even related to this problem. If I'm running a web service on Amazon EC2, distributed across many instances, how can I make it so a single domain name can be used to access the entire pool of servers, which will be changing from time to time? Since an instance may be present one second but gone the next (and vice versa), I need a way to randomly pick an active member of the cluster to route to. The updates would have to be instantaneous. Is this even possible, with DNS caching and all?
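    For illustration, the DNS-only version of this is plain round-robin A records with a short TTL (names and addresses below are placeholders); it can never be truly instantaneous because resolvers cache answers, which is why a load balancer with one stable address in front of the changing instances - on EC2, Elastic Load Balancing - is the usual answer:

        www.example.com.  60  IN  A  203.0.113.10
        www.example.com.  60  IN  A  203.0.113.11
        www.example.com.  60  IN  A  203.0.113.12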

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface

    This is the content from the Oracle OpenWorld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy.

    Table of Contents: Exercise Z.1: ZFS Pools; Exercise Z.2: ZFS File Systems; Exercise Z.3: ZFS Compression; Exercise Z.4: ZFS Deduplication; Exercise Z.5: ZFS Encryption; Exercise Z.6: Solaris 11 Shadow Migration.

    Introduction

    This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M), with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in VirtualBox running Solaris 11 with 6 virtual 3 GB disks to play with.

    Exercise Z.1: ZFS Pools

    Task: You have several disks to use for your new file system. Create a new zpool and a file system within it.

    Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this:

        root@solaris:~# zpool list
        NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        rpool  15.9G  6.62G  9.25G  41%  1.00x  ONLINE  -
        root@solaris:~# zpool status
          pool: rpool
         state: ONLINE
          scan: none requested
        config:
                NAME        STATE   READ  WRITE  CKSUM
                rpool       ONLINE     0      0      0
                  c3t0d0s0  ONLINE     0      0      0
        errors: No known data errors

    Note the disk device the root pool is on - c3t0d0s0. Now you will create your own ZFS pool. First you will check what disks are available:

        root@solaris:~# echo | format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
        0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63>  /pci@0,0/pci8086,2829@d/disk@0,0
        1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>  /pci@0,0/pci8086,2829@d/disk@2,0
        2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>  /pci@0,0/pci8086,2829@d/disk@3,0
        3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>  /pci@0,0/pci8086,2829@d/disk@4,0
        4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>  /pci@0,0/pci8086,2829@d/disk@5,0
        5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>  /pci@0,0/pci8086,2829@d/disk@6,0
        6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32>  /pci@0,0/pci8086,2829@d/disk@7,0
        Specify disk (enter its number): Specify disk (enter its number):

    The root disk is numbered 0. The others are free for use. Try creating a simple pool and observe the error message:

        root@solaris:~# zpool create mypool c3t2d0 c3t3d0
        'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool

    So destroy that pool and create a mirrored pool instead:

        root@solaris:~# zpool destroy mypool
        root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0
        root@solaris:~# zpool status mypool
          pool: mypool
         state: ONLINE
          scan: none requested
        config:
                NAME        STATE   READ  WRITE  CKSUM
                mypool      ONLINE     0      0      0
                  mirror-0  ONLINE     0      0      0
                    c3t2d0  ONLINE     0      0      0
                    c3t3d0  ONLINE     0      0      0
        errors: No known data errors

    Exercise Z.2: ZFS File Systems

    Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created:

        root@solaris:~# zfs list
        NAME    USED   AVAIL  REFER  MOUNTPOINT
        mypool  86.5K  2.94G  31K    /mypool

    Create your filesystems and mountpoints as follows:

        root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1

    The -o option sets the mount point and automatically creates the necessary directory.

        root@solaris:~# zfs list mypool/mydata1
        NAME            USED  AVAIL  REFER  MOUNTPOINT
        mypool/mydata1  31K   2.94G  31K    /data1

    Exercise Z.3: ZFS Compression

    Task: Try out different forms of compression available in ZFS.

    Lab: Create a 2nd filesystem with compression, fill both file systems with the same data, and observe the results. You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax:

        compression=on | off | lzjb | gzip | gzip-N | zle

    This controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately:

        root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2
        root@solaris:~# zfs set compression=gzip-9 mypool/mydata2
        root@solaris:~# zfs get compression mypool/mydata1
        NAME            PROPERTY     VALUE  SOURCE
        mypool/mydata1  compression  off    default
        root@solaris:~# zfs get compression mypool/mydata2
        NAME            PROPERTY     VALUE   SOURCE
        mypool/mydata2  compression  gzip-9  local

    Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below:

        root@solaris:~# cd /usr/lib
        root@solaris:/usr/lib# find . -print | cpio -pdv /data1
        root@solaris:/usr/lib# find . -print | cpio -pdv /data2

    The copy into the compressing file system takes longer - as it has to perform the compression - but the results show the effect:

        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.35G  1.59G  31K    /mypool
        mypool/mydata1  1.01G  1.59G  1.01G  /data1
        mypool/mydata2  341M   1.59G  341M   /data2

    Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administrators Guide.

    Exercise Z.4: ZFS Deduplication

    The deduplication property is used to remove redundant data from a ZFS file system. With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared.

    Task: See how to implement deduplication and its effects.

    Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib.

        root@solaris:/usr/lib# zfs destroy mypool/mydata2
        root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1
        root@solaris:/usr/lib# rm -rf /data1/*
        root@solaris:/usr/lib# mkdir /data1/2nd-copy
        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.02M  2.94G  31K    /mypool
        mypool/mydata1  43K    2.94G  43K    /data1
        root@solaris:/usr/lib# find . -print | cpio -pd /data1
        2142768 blocks
        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.02G  1.99G  31K    /mypool
        mypool/mydata1  1.01G  1.99G  1.01G  /data1
        root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy
        2142768 blocks
        root@solaris:/usr/lib# zfs list
        NAME            USED   AVAIL  REFER  MOUNTPOINT
        mypool          1.99G  1.96G  31K    /mypool
        mypool/mydata1  1.98G  1.96G  1.98G  /data1

    You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool, and there is a zpool-wide property dedupratio:

        root@solaris:/usr/lib# zpool get dedupratio mypool
        NAME    PROPERTY    VALUE  SOURCE
        mypool  dedupratio  4.30x  -

    Deduplication can also be checked using "zpool list":

        root@solaris:/usr/lib# zpool list
        NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        mypool  2.98G  1001M  2.01G  32%  4.30x  ONLINE  -
        rpool   15.9G  6.66G  9.21G  41%  1.00x  ONLINE  -

    Before moving on to the next topic, destroy that dataset and free up some space:

        root@solaris:~# zfs destroy mypool/mydata1

    Exercise Z.5: ZFS Encryption

    Task: Encrypt sensitive data.

    Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality.

        root@solaris:~# zfs create -o encryption=on mypool/data2
        Enter passphrase for 'mypool/data2': ********
        Enter again: ********
        root@solaris:~#

    Creation of a descendent dataset shows that encryption is inherited from the parent:

        root@solaris:~# zfs create mypool/data2/data3
        root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2
        NAME                PROPERTY    VALUE              SOURCE
        mypool/data2        encryption  on                 local
        mypool/data2        keysource   passphrase,prompt  local
        mypool/data2        keystatus   available          -
        mypool/data2        checksum    sha256-mac         local
        mypool/data2/data3  encryption  on                 inherited from mypool/data2
        mypool/data2/data3  keysource   passphrase,prompt  inherited from mypool/data2
        mypool/data2/data3  keystatus   available          -
        mypool/data2/data3  checksum    sha256-mac         inherited from mypool/data2

    You will find that the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2".

    Exercise Z.6: Shadow Migration

    Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification of the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.

    Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it.

    Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate:

        root@solaris:~# zpool create oldstuff c3t4d0
        root@solaris:~# zfs create oldstuff/forgotten

    Then populate it with some files:

        root@solaris:~# cd /var/adm
        root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten

    You need the shadow-migration package installed:

        root@solaris:~# pkg install shadow-migration
        Packages to install:  1
        Create boot environment: No
        Create backup boot environment: No
        Services to change:  1
        DOWNLOAD      PKGS  FILES  XFER (MB)
        Completed     1/1   14/14  0.2/0.2
        PHASE                       ACTIONS
        Install Phase               39/39
        PHASE                       ITEMS
        Package State Update Phase  1/1
        Image State Update Phase    2/2

    You then enable the shadowd service:

        root@solaris:~# svcadm enable shadowd
        root@solaris:~# svcs shadowd
        STATE   STIME    FMRI
        online  7:16:09  svc:/system/filesystem/shadowd:default

    Set the filesystem to be migrated to read-only:

        root@solaris:~# zfs set readonly=on oldstuff/forgotten

    Create a new zfs file system with the shadow property set to the file system to be migrated:

        root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered

    Use the shadowstat(1M) command to see the progress of the migration:

        root@solaris:~# shadowstat
        DATASET            BYTES XFRD  EST BYTES LEFT  ERRORS  ELAPSED TIME
        mypool/remembered  92.5M       -               -       00:00:59
        mypool/remembered  99.1M       302M            -       00:01:09
        mypool/remembered  109M        260M            -       00:01:19
        mypool/remembered  133M        304M            -       00:01:29
        mypool/remembered  149M        339M            -       00:01:39
        mypool/remembered  156M        86.4M           -       00:01:49
        mypool/remembered  156M        8E              29      (completed)

    Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual. That concludes this lab session.

    Read the article

  • Ubuntu 12.04 HP G72: problem installing proprietary wireless driver

    - by user69402
    I have a fresh Ubuntu 12.04 installed on an HP G72 machine. In order for my wireless to work I need the proprietary driver installed - the Broadcom STA wireless driver. Trying to install it from System Settings gives me the error: "Sorry, installation of this driver failed. Please have a look at the log file for details: /var/log/jockey.log". So far I suspect the error is caused by a bad "bcmwl-kernel-source" installation. What I tried: 1. removing "bcmwl-kernel-source"; 2. installing "bcmwl-kernel-source" through the terminal, which ends with "error code (1)". I would greatly appreciate any help. Here is everything that the terminal returns:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed: bcmwl-kernel-source
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,151 kB of archives.
        After this operation, 3,514 kB of additional disk space will be used.
        Selecting previously unselected package bcmwl-kernel-source.
        (Reading database ... 170331 files and directories currently installed.)
        Unpacking bcmwl-kernel-source (from .../bcmwl-kernel-source_5.100.82.38+bdcom-0ubuntu6.1_amd64.deb) ...
        Setting up bcmwl-kernel-source (5.100.82.38+bdcom-0ubuntu6.1) ...
        Loading new bcmwl-5.100.82.38+bdcom DKMS files...
        /usr/sbin/dkms: line 467: unset: POST_REMOVE$PRE_BUMLD': not a valid identifier
        /usr/sbin/dkms: line 467: unset:BUILD_E\CLUWIVE_ARCH': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset:$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: modules_conf_arra}': not a valid identifier
        /usr/sbin/dkms: line 467: unset:$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset:$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: $': not a valid identifier
        /usr/sbin/dkms: line 467: unset:$': not a valid identifier
        /usr/sbin/dkms: line 467: unset: `$': not a valid identifier
        /usr/sbin/dkms: line 419: ${!POST_REMOVE$PRE_BUMLD[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!BUILD_E\CLUWIVE_ARCH[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        /usr/sbin/dkms: line 419: ${!$[@]}: bad substitution
        malloc: ../bash/subst.c:3671: assertion botched
        free: start and end chunk sizes differ
        Aborting.../tmp/tmp.pEXTnftUfI: line 4: modules_conf_arra}[[@]}]=[[@]}]}: command not found
        dkms.conf: Error! No 'DEST_MODULE_LOCATION' directive specified for record #0.
        dkms.conf: Error! Directive 'DEST_MODULE_LOCATION' does not begin with '/kernel', '/updates', or '/extra' in record #0.
        dkms.conf: Error! No 'PACKAGE_VERSION' directive specified.
        Error! Bad conf file.
        File: /usr/src/bcmwl-5.100.82.38+bdcom/dkms.conf does not represent a valid dkms.conf file.
        dpkg: error processing bcmwl-kernel-source (--configure): subprocess installed post-installation script returned error exit status 8
        Errors were encountered while processing: bcmwl-kernel-source
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • HTTP Error 503. The service is unavailable

    - by user1671639
    I'm struggling to set up the environment in IIS 8. I searched a lot but couldn't find the right solution. I checked the error logs, but have no idea. C:\Windows\System32\LogFiles\HTTPERR:

        2013-10-09 09:28:39 192.168.43.205 60172 192.168.43.205 80 HTTP/1.1 GET / 503 2 AppOffline qa.hti.local
        2013-10-09 09:28:39 192.168.43.205 60192 192.168.43.205 80 HTTP/1.1 GET /favicon.ico 503 2 AppOffline qa.hti.local

    Then in Event Viewer, WARNINGS:

        A listener channel for protocol 'http' in worker process '11188' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '7492' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '9088' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '9964' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '7716' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.

    I don't understand what the warning means. ERROR:

        Application pool 'qa.hti.local' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

    Note: I learned that 5 consecutive failures lead to an app pool shutdown (rapid-fail protection), and that this limit can be increased. I tried increasing it, but with no success. OS: Windows Server 2012. IIS version: 8. Please share your thoughts.

    Read the article

  • Silverlight DataGrid not stretching to accommodate all items in data source?

    - by bplus
    I'm having problems getting a Silverlight DataGrid to stretch to accommodate all the items in its data source. I've got a Grid that contains two DataGrids. I've tried setting Height="Auto" on the Grid and the DataGrids. I've tried setting HorizontalContentAlignment="Stretch" on the Grid and the DataGrids. The object tag has height="100%". I've set Height="*" on the RowDefinitions for the Grid. Any help would be very much appreciated! Here's the code listing:

        <UserControl x:Class="TimeSheet.SilverLight.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            xmlns:local="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data"
            mc:Ignorable="d">
          <Grid x:Name="LayoutRoot" Height="Auto" ShowGridLines="True" HorizontalAlignment="Stretch">
            <Grid.RowDefinitions>
              <RowDefinition Height="*"/>
              <RowDefinition Height="*"/>
              <RowDefinition Height="*"/>
              <RowDefinition Height="*"/>
            </Grid.RowDefinitions>
            <local:DataGrid BorderThickness="5" HorizontalContentAlignment="Stretch" AutoGenerateColumns="False" VerticalAlignment="Top" x:Name="NonProjectGrid" Grid.Row="0">
              <local:DataGrid.Columns>
                <local:DataGridTextColumn Header="Activity" Binding="{Binding TaskName}" />
                <local:DataGridTextColumn Header="Monday" Binding="{Binding Monday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Tuesday" Binding="{Binding Tuesday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Wednesday" Binding="{Binding Wednesday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Thursday" Binding="{Binding Thursday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Friday" Binding="{Binding Friday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Saturday" Binding="{Binding Saturday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Sunday" Binding="{Binding Sunday, Mode=TwoWay}" />
              </local:DataGrid.Columns>
            </local:DataGrid>
            <local:DataGrid BorderThickness="5" HorizontalContentAlignment="Stretch" AutoGenerateColumns="False" VerticalAlignment="Top" x:Name="ProjectGrid" Grid.Row="2">
              <local:DataGrid.Columns>
                <local:DataGridTextColumn Header="Bug Number" Binding="{Binding BugNo}" />
                <local:DataGridTextColumn Header="Activity" Binding="{Binding TaskName}" />
                <local:DataGridTextColumn Header="Monday" Binding="{Binding Monday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Tuesday" Binding="{Binding Tuesday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Wednesday" Binding="{Binding Wednesday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Thursday" Binding="{Binding Thursday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Friday" Binding="{Binding Friday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Saturday" Binding="{Binding Saturday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Sunday" Binding="{Binding Sunday, Mode=TwoWay}" />
              </local:DataGrid.Columns>
            </local:DataGrid>
            <Button Name="AddBugBtn" Width="125" Height="25" Content="Add From Bugzilla" Click="AddBug_Click" Grid.Row="3" HorizontalAlignment="Right"></Button>
            <Button Name="SaveBtn" Width="125" Height="25" Content="Save" Click="Save_Click" Grid.Row="3" HorizontalAlignment="Left"></Button>
          </Grid>

    Read the article
