Search Results

Search found 10583 results on 424 pages for 'dev groups'.


  • Background image not getting vertically stretched in Chrome.

    - by KPL
    Hi all,
    The CSS:

        #header {
          overflow: hidden;
          background: url(images/header-bg.png) top repeat-x #FFFFFF;
          position: relative;
          border: none;
          display: block;
          height: 125px;
          width: 100%;
        }

    The HTML:

        <div id="header">
          <a href="http://localhost/" title="Dev" id="logo"><img src="images/logo.png" alt="" /></a>
        </div>

    This works well in Firefox, but not in Chrome: the image isn't being stretched vertically. Help! Just a note, I'm on Linux.
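
    One likely culprit: repeat-x tiles the image at its natural height and never stretches it, in any browser. If the goal is a tile stretched to the full 125px header height, the CSS3 background-size property is the usual tool; a minimal sketch, assuming CSS3 support (vendor prefixes included for browsers of that era):

        #header {
          background: url(images/header-bg.png) top left repeat-x #FFFFFF;
          height: 125px;
          /* 'auto 100%' scales each tile to the element's full height
             while keeping the horizontal repeat. */
          -webkit-background-size: auto 100%;
          -moz-background-size: auto 100%;
          background-size: auto 100%;
        }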


  • IO redirect engine with metadata

    - by hawk.hsieh
    Is there any C library or tool that can redirect IO, be configured by metadata, and provide a dynamic link library hook for custom processing when feeding data into the next IO? For example, a network video recorder:

        record video:   socket -> do_something() -> file
        preview video:  socket -> do_something() -> PCI device
        http service:
          download file: socket -> do_something(http) -> file
          post file:     socket -> do_something(http) -> file
        serial control:
          monitor device: uart -> do_something(custom protocol) -> popen("zip") -> socket

    I know unix-like OSes have an IO redirection feature and can integrate any application you want; even for socket IO you can use /dev/tcp or implement a process that redirects to stdout. But this is process based: a process's footprint is big and IPC is heavy. Therefore, I am looking for something that redirects IO within a single process, where the data flow between IOs is configurable with metadata (XML, JSON, or others).


  • How do I read and traverse inodes

    - by Eric Fossum
    I've opened the super-block and group descriptor in an EXT2 filesystem, but I don't know how to read, for instance, the root directory or the files in it. Here's some of what I've got:

        fd = open("/dev/sdb2", O_RDONLY);
        lseek(fd, SuperSize, SEEK_SET);
        read(fd, &super_block, SuperSize);
        lseek(fd, 4096, SEEK_SET);
        read(fd, &groupDesc, DescriptSize);

    but this next part doesn't seem to work:

        lseek(fd, super_block.s_log_block_size * groupDesc.bg_inode_table, SEEK_SET);
        lseek(fd, InodeSize * (EXT2_ROOT_INO - 1), SEEK_CUR);
        read(fd, &root, InodeSize);
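
    The likely culprit in the failing seek is s_log_block_size: in ext2 it is not the block size itself but a log2 shift, with the block size equal to 1024 << s_log_block_size. A minimal sketch of the corrected offset arithmetic, assuming the same struct definitions and that InodeSize matches the on-disk inode size (128 bytes for revision 0 filesystems):

        /* Block size is 1024 shifted left by s_log_block_size, not the
         * field's raw value (0 -> 1024, 1 -> 2048, 2 -> 4096, ...). */
        unsigned int block_size = 1024 << super_block.s_log_block_size;

        /* bg_inode_table is a block number; convert it to a byte offset. */
        off_t table_offset = (off_t)groupDesc.bg_inode_table * block_size;

        /* The root directory is inode 2 (EXT2_ROOT_INO); inode numbers
         * are 1-based, hence the -1. */
        lseek(fd, table_offset + (off_t)(EXT2_ROOT_INO - 1) * InodeSize, SEEK_SET);
        read(fd, &root, InodeSize);

    Also worth checking: the superblock always starts at byte offset 1024 from the start of the partition, so the first lseek works only if SuperSize happens to be 1024.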


  • How to keep Hibernate mapping use under control as requirements grow

    - by David Plumpton
    I've worked on a number of Java web apps where persistence is via Hibernate, and we start off with some central class (e.g. an insurance application) without spending any time considering how to break things up into manageable chunks. Over time, as features are added, we add more mappings (rates, clients, addresses, etc.), and the amount of time spent saving and loading an insurance object and everything it connects to grows. In particular, you get close to a go-live date and performance testing with larger amounts of data in each table starts to demonstrate that it's all too slow. Obviously there are a number of ways we could attempt to partition things up (e.g. map only the client classes for the client CRUD screens), which would have been better to put in place early rather than trying to work in at the end of the dev cycle. I'm just wondering if there are recommendations about ways to handle or mitigate this.
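
    One mitigation that works regardless of how the mappings are partitioned is making heavy associations lazy, so that touching the central object doesn't drag the whole graph along. A minimal sketch using JPA annotations; the entity and field names here are hypothetical, not from the question:

        import javax.persistence.*;
        import java.util.Set;

        @Entity
        public class InsuranceApplication {
            @Id @GeneratedValue
            private Long id;

            // To-many associations are lazy by default in JPA/Hibernate;
            // stated explicitly for emphasis. Screens that never show
            // rates won't pay the cost of loading them.
            @OneToMany(fetch = FetchType.LAZY, mappedBy = "application")
            private Set<Rate> rates;

            // To-one associations default to EAGER; these are often the
            // silent cost when "everything it connects to" gets loaded.
            @ManyToOne(fetch = FetchType.LAZY)
            private Client client;
        }

        @Entity
        class Rate {
            @Id @GeneratedValue Long id;
            @ManyToOne InsuranceApplication application;
        }

        @Entity
        class Client {
            @Id @GeneratedValue Long id;
        }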


  • How to translate a curl POST request into a Ruby script?

    - by 0x90
    I have this POST request:

        curl -i -X POST \
          -H "Accept:application/json" \
          -H "content-type:application/x-www-form-urlencoded" \
          -d "disambiguator=Document&confidence=-1&support=-1&text=President%20Obama%20called%20Wednesday%20on%20Congress%20to%20extend%20a%20tax%20break%20for%20students%20included%20in%20last%20year%27s%20economic%20stimulus%20package" \
          http://spotlight.dbpedia.org/dev/rest/annotate/

    How can I write it in Ruby? I tried this, as Kyle told me:

        require 'rubygems'
        require 'net/http'
        require 'uri'

        uri = URI.parse('http://spotlight.dbpedia.org/rest/annotate')
        http = Net::HTTP.new(uri.host, uri.port)
        request = Net::HTTP::Post.new(uri.request_uri)
        request.set_form_data({
          "disambiguator" => "Document",
          "confidence"    => "0.3",
          "support"       => "0",
          "text"          => "President Obama called Wednesday on Congress to extend a tax break for students included in last year's economic stimulus package"
        })
        request.add_field("Accept", "application/json")
        request.add_field("Content-Type", "application/x-www-form-urlencoded")
        response = http.request(request)
        puts response.inspect

    but got this error:

        #<Net::HTTPInternalServerError 500 Internal Error readbody=true>
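
    Before changing anything else, note two visible differences between the two versions: the curl command posts to /dev/rest/annotate/ while the Ruby script posts to /rest/annotate, and the confidence/support values differ (-1/-1 vs. 0.3/0). A sketch that mirrors the working curl request exactly:

        require 'rubygems'
        require 'net/http'
        require 'uri'

        # Same endpoint and parameter values as the curl command.
        uri = URI.parse('http://spotlight.dbpedia.org/dev/rest/annotate/')
        http = Net::HTTP.new(uri.host, uri.port)

        request = Net::HTTP::Post.new(uri.request_uri)
        request.set_form_data({
          "disambiguator" => "Document",
          "confidence"    => "-1",
          "support"       => "-1",
          "text"          => "President Obama called Wednesday on Congress to extend a tax break for students included in last year's economic stimulus package"
        })
        request.add_field("Accept", "application/json")

        response = http.request(request)
        puts response.code, response.body

    (set_form_data already sets the Content-Type to application/x-www-form-urlencoded, so adding that header again isn't necessary.)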


  • How do I tell which account is trying to access an ASP.NET web service?

    - by Andrew Lewis
    I'm getting a 401 (access denied) calling a method on an internal web service. I'm calling it from an ASP.NET page on our company intranet. I've checked all the configuration, and it should be using integrated security with an account that has access to that service, but I'm trying to figure out how to confirm which account it's connecting under. Unfortunately, I can't debug the code on the production network. In our dev environment everything works fine. I know there has to be a difference in the settings, but I'm at a loss as to where to start. Any recommendations?
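
    A quick way to confirm the identity without a debugger is to write it out from the calling page right before the service call. A minimal sketch (where the output goes, response, trace, or a log file, is up to you):

        using System.Security.Principal;

        // Identity of the worker thread that makes the outbound call;
        // this is what integrated security presents to the web service.
        WindowsIdentity current = WindowsIdentity.GetCurrent();
        Response.Write("Thread identity: " + current.Name + "<br/>");

        // Identity ASP.NET authenticated for the incoming request.
        Response.Write("Request identity: " + User.Identity.Name);

    If those differ from what you expect, the usual suspects are the application pool identity and whether impersonation (<identity impersonate="true" />) is enabled in web.config; with impersonation off, the outbound call goes out as the app pool account, not the logged-in user.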


  • Getting Started with Fluent NHibernate

    - by Andy
    I'm trying to get into using Fluent NHibernate, and I have a couple of questions. I'm finding the documentation to be lacking. I understand that Fluent NHibernate / NHibernate allows you to auto-generate a database schema.

    1. Do people usually only do this for test/dev databases, or is it OK to do for a production database as well? If it's OK for production, how do you make sure you're not blowing away production data every time you run your app?
    2. Once the database schema is created and you have production data, when new tables/columns/etc. need to be added to the test and/or production database, do people let NHibernate do this, or should it be done manually?
    3. Is there any REALLY GOOD documentation on Fluent NHibernate? (Please don't point me to the wiki, because while following along with the "Your first project" code and building it myself, I was getting run-time errors because they forgot to tell you to add a reference. Not cool.)

    Thanks,
    Andy
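
    On the schema-generation questions, one common pattern (a sketch, not the only way) is to expose the underlying NHibernate Configuration and pick the tool by environment: SchemaExport drops and recreates everything (fine for test/dev, fatal for production data), while SchemaUpdate only issues additive changes. This assumes the standard NHibernate.Tool.hbm2ddl types; the mapping class, connection string, and environment flag below are hypothetical:

        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Tool.hbm2ddl;

        public static class SessionFactoryBuilder
        {
            public static ISessionFactory Build(bool isProduction)
            {
                return Fluently.Configure()
                    .Database(MsSqlConfiguration.MsSql2008
                        .ConnectionString("Server=.;Database=app;Trusted_Connection=True;"))
                    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<BidMap>())
                    .ExposeConfiguration(cfg =>
                    {
                        if (isProduction)
                            new SchemaUpdate(cfg).Execute(false, true);  // additive only
                        else
                            new SchemaExport(cfg).Create(false, true);   // drop + recreate
                    })
                    .BuildSessionFactory();
            }
        }

    Even SchemaUpdate won't drop or alter existing columns, which is why many teams still script production schema changes by hand.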


  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs, etc.

    We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running apache, zend, tomcat, mysql, etc. Ideally we don't want to have to shut them down to back up (but could). We assume that standard unix commands like tar, cpio, rsync, scp, etc. are of no use, as they only copy files, not partitions, attributes, groups, etc., i.e. they do not produce a result which can simply be re-imaged onto a new HD to bring the server back from the dead.

    We have a large SAN, a spare Windows box, and spare unix boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for unix client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore).

    Googling returns some applications to do this: Clonezilla looks difficult to install and invasive, Mondo only seems to support backup if you are local to the machine, and Amanda might be an option but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning, and configuring a set of backup packages? Any ideas? This must be a pretty standard problem, but googling doesn't give an obvious answer.
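
    For what it's worth, dd is not limited to local disks: piped over ssh, it can image each server's disk onto the central backup host. A sketch of the pull direction, with hypothetical hostnames, assuming key-based root ssh:

        #!/bin/bash
        # Pull a compressed raw image of each server's first disk.
        # NOTE: imaging a mounted, running system can yield an inconsistent
        # filesystem; quiesce services (or drop to single-user) for clean images.
        for host in server01 server02 server03; do
            ssh root@"$host" "dd if=/dev/sda bs=1M conv=noerror,sync | gzip -c" \
                > "/mnt/usb/${host}-sda.img.gz"
        done

    Restoring is the reverse (gunzip -c image.gz | dd of=/dev/sda, from a rescue boot). That said, for 40 machines with mostly-empty disks, a file-level copy is usually far smaller and faster: run as root with the right flags (e.g. rsync -aHAX), tar and rsync do preserve owners, groups, permissions, and hard links, and a saved partition table (sfdisk -d) plus a reinstalled bootloader covers the rest of a bare-metal rebuild.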


  • How do I set a variable inside a bash for loop?

    - by Isaac Moore
    I need to set a variable inside of a bash for loop, which for some reason is not working for me. Here is an excerpt of my script:

        function unlockBoxAll {
          appdir=$(grep -i "CutTheRope.app" /tmp/App_list.tmp)
          for lvl in {0..24}
            key="UNLOCKED_$box_$lvl"
            plutil -key "$key" -value "1" "$appdir/../Library/Preferences/com.chillingo.cuttherope.plist" 2>&1> /dev/null
            successCheck=$(plutil -key "$key" "$appdir/../Library/Preferences/com.chillingo.cuttherope.plist")
            if [ "$successCheck" -eq "1" ]; then
              echo "Success! "
            else
              echo "Failed: Key is $successCheck "
            fi
          done
        }

    As you can see, I try to write to a variable inside the loop with:

        key="UNLOCKED_$box_$lvl"

    But when I do that, I get this:

        /usr/bin/cutTheRope.sh: line 23: syntax error near unexpected token `key="UNLOCKED_$box_$lvl"'
        /usr/bin/cutTheRope.sh: line 23: `key="UNLOCKED_$box_$lvl"'

    What am I not doing right? Is there another way to do this? Please help, thanks.
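
    Two things stand out in the excerpt: the for loop is missing its `do` keyword, which is exactly what produces "syntax error near unexpected token" on the following line, and in UNLOCKED_$box_$lvl the shell parses $box_ as a variable named box_, not $box followed by an underscore. A minimal corrected sketch of just the loop skeleton:

        for lvl in {0..24}; do
          # Braces keep the trailing underscore out of the variable name.
          key="UNLOCKED_${box}_${lvl}"
          echo "$key"
        done

    Separately, 2>&1> /dev/null sends stderr to the terminal and only stdout to /dev/null; to silence both, the order is > /dev/null 2>&1.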


  • VPC SSH port forward into private subnet

    - by CP510
    Ok, so I've been racking my brain for DAYS on this dilemma. I have a VPC set up with a public subnet and a private subnet, with the NAT in place of course. I can connect over SSH to an instance in the public subnet, as well as to the NAT. I can even SSH to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number, 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e.

        ssh -i keyfile.pem user@1.2.3.4 -p 1300

    and 1.2.3.4 should route it to the internal server 10.10.10.10.

    Now I heard iptables is the tool for this job, so I went ahead and researched and played around with some routing. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this, since AWS apparently pre-configures NAT instances when you set them up, and I heard using iptables can mess that up.

        *filter
        :INPUT ACCEPT [129:12186]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [84:10472]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
        -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
        -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        COMMIT
        # Completed on Wed Apr 17 04:19:29 2013
        # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
        *nat
        :PREROUTING ACCEPT [2:104]
        :INPUT ACCEPT [2:104]
        :OUTPUT ACCEPT [6:681]
        :POSTROUTING ACCEPT [7:745]
        -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
        -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
        COMMIT

    So when I try this from home, it just times out. No connection refused messages or anything, and I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communications on these ports in both directions, in both subnets and on the NAT. I'm at a loss. What am I doing wrong?
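
    One thing the rules alone can't fix: for DNAT'd packets to traverse the public instance at all, the kernel must have IP forwarding enabled, and on EC2 the forwarding instance's Source/Destination Check must be disabled (the same setting NAT instances need), since forwarded packets carry addresses that aren't the instance's own. A quick sketch:

        # On the public (forwarding) instance:
        sysctl -w net.ipv4.ip_forward=1
        # Persist across reboots:
        echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf

    Source/Dest Check is toggled per instance in the EC2 console (or via the API); with it enabled, EC2 silently drops the forwarded traffic, which matches the symptom of timeouts with nothing in the logs.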


  • Strange CSS Positioning issue with a fader on a responsive site

    - by EPICWebDesign
    I'm developing a prototype responsive WordPress theme version of my homepage, and I'm running into issues with the fader on the homepage: http://dev.epicwebdesign.ca/epicblog/

    When the image switches, it uses position:absolute to overlay the pictures, then goes back to position:relative. I need the absolute positioning to be relative to the bottom-left corner of the menu instead of the top-left corner of #wrap, so it doesn't overlap. I tried putting it in a container div like this: http://wiki.orbeon.com/forms/doc/contributor-guide/browser#TOC-Absolutely-positioned-box-inside-a-box-with-overflow:-auto-or-hidden but that doesn't seem to work. Any ideas?

    There are major CSS compatibility issues in IE, so try it in Chrome. It should look similar to http://epicwebdesign.ca. When the browser window is shrunk horizontally, the whole theme compensates.
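
    Since absolutely positioned boxes are placed relative to their nearest *positioned* ancestor (not necessarily #wrap), the usual fix is to make the fader's own container that ancestor. A minimal sketch with hypothetical selectors:

        /* The container the images should overlap inside. It stays in
           the normal flow, but position:relative makes it the reference
           box for absolutely positioned children. */
        #fader {
          position: relative;
          height: 300px;   /* reserve the fader's space so it doesn't collapse */
        }

        /* Outgoing and incoming images are pinned to the same spot
           inside #fader, so they overlay each other rather than #wrap. */
        #fader img {
          position: absolute;
          top: 0;
          left: 0;
          width: 100%;
        }

    With that in place, the fade only needs to toggle opacity or z-index, and nothing ever has to switch back to position:relative.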


  • What is the ExtJS philosophy? Single-page applications?

    - by stach
    I need to write my next project using ExtJS. It's a nice JavaScript lib, but I don't fully understand the idea behind it. Take the docs page for example: http://www.extjs.com/deploy/dev/docs/ Am I supposed to write my web applications with ExtJS like that? One page that is never refreshed, with everything done by AJAX? How do you debug such applications, if getting to the right place may take a lot of clicking and working with the page? You cannot fix the bug and hit refresh in the browser to see the results. Any suggestions?


  • How to log POSTed form submissions?

    - by justSteve
    Back in the ASP classic days, when I needed to write out the name/value pairs of forms submitted by POST, I threw this loop into the page:

        on error resume next
        for each x in Request.Form
            Response.AppendToLog x & "=" & Request(x)
        next

    It threw all the form fields and values into the log, just as GETs are logged. Does IIS7/.NET give me any better method? (This is for the dev/testing portion of the project; I have no concern about the space or cycles used to accomplish this.) thx
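
    The closest ASP.NET equivalent is an Application_BeginRequest handler (or a dedicated IHttpModule); Response.AppendToLog still exists and writes into the IIS log just as it did in classic ASP. A minimal sketch for Global.asax.cs:

        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                HttpRequest request = Context.Request;
                if (request.HttpMethod == "POST")
                {
                    // Note: reading Request.Form here consumes the input
                    // early; fine for dev/test logging, as stated.
                    foreach (string key in request.Form.Keys)
                    {
                        Context.Response.AppendToLog(key + "=" + request.Form[key]);
                    }
                }
            }
        }

    IIS7 also offers Failed Request Tracing, but capturing form bodies with it takes extra configuration; the snippet above mirrors the classic ASP behavior most directly.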


  • Writing iPhone apps on Linux - what tools do you need?

    - by Sia.G
    Hello. I wanted to know if it's possible to WRITE and COMPILE/TEST iPhone apps on a Linux platform. I've been on Google for a couple of days now, and people either say "Mac OS X only!" or "develop jailbroken apps on Linux". My dev partner has a Mac and has a certificate to sign the apps. I don't have a Mac, but I will be doing most of the development. So what I want to do is simply develop/test the app on Linux and, when it's finished, hand over the code to him; he will then compile the finalized app and sign it, ready for submission to the App Store. Could anyone tell me what Linux tools I would need to accomplish this?


  • Dynamically changing validations in Rails

    - by user94154
    I have a model with a validation. At runtime, I'd like to change a value used by the validation. For example, in the model bid.rb:

        class Bid < ActiveRecord::Base
          @foo = Foo.find(1)
          validates_inclusion_of :amt, :in => 1..@foo.bar,
            :message => "must be between 1 and #{@foo.bar}"
        end

    and in the application_controller (pseudocode):

        if today == 'wednesday'
          Foo.update(1, :bar => 10)
        else
          Foo.update(1, :bar => 5)
        end

    However, this setup isn't working. The "foo" attribute never updates. It seems that the validation code is evaluated only when the dev server starts and then doesn't change.
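
    That is exactly what happens: the class body, including the validates_inclusion_of line, runs once when the model is first loaded, so @foo.bar is frozen at boot. To have the limit consulted on every validation, the lookup has to move into code that runs per-instance; a sketch using a custom validation method:

        class Bid < ActiveRecord::Base
          validate :amt_within_configured_range

          private

          # Runs on every save, so changes to Foo#bar take effect immediately.
          def amt_within_configured_range
            limit = Foo.find(1).bar
            unless (1..limit).include?(amt)
              errors.add(:amt, "must be between 1 and #{limit}")
            end
          end
        end

    (Recent Rails versions also accept a proc/lambda for :in, which achieves the same deferral without a custom method.)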


  • Styling a list as tabs with a background overflowing into content

    - by Litso
    I couldn't think of a better way to name this question, but I'll explain. I have a MediaWiki website with a background pattern (like parchment) behind the articles. At the top of each article I want to have tabs like Wikipedia does (with page | talk | edit etc. links). The problem is, the tabs should fit seamlessly with the article's background, and I can't figure out if this is actually possible. The way I was trying to do it was positioning the list inside the actual content div and giving the <li> items a transparent background, but as far as I can see there's no way to color the rest of the <ul>'s background black without affecting the <li>s inside it. Anyone have an idea? (example URL: http://dev.mansonwiki.com/wiki/Sandbox)
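
    One approach that can work: give the <ul> the solid black bar, keep every <li> transparent by default, and have only the active tab reuse the article's parchment image so it appears to merge into the page. A sketch with hypothetical class names and image paths:

        ul.articletabs {
          margin: 0;
          padding: 0;
          background: #000;        /* the black bar behind all tabs */
          overflow: hidden;        /* contain the floated items */
        }
        ul.articletabs li {
          float: left;
          list-style: none;
          background: transparent; /* inactive tabs show the black bar */
        }
        ul.articletabs li.selected {
          /* The active tab borrows the article background, so tab and
             content area read as one continuous surface. */
          background: url(images/parchment.png) bottom left;
        }

    Lining up the parchment pattern between the tab and the content div is the fiddly part; anchoring both backgrounds to the same edge (bottom) usually keeps the texture continuous.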


  • MS DPM 2007: Testing the Recovery for a Production Domain

    - by NewToDPM
    Hi everybody! MS DPM 2007 is a new technology in my company, and so am I to the product. We have a classic Microsoft domain with two DCs, Exchange 2007, and a couple of Web/MS SQL servers. I deployed DPM on the domain one month ago, and after fixing various replica inconsistency issues and adapting the schedule and retention range to the server storage pool size, I can say the backup system is working correctly (no errors) as of today.

    However, there is one problem: we have not yet attempted to restore from the backups, which is a big no-no of course. I'm not sure how to handle this, my main concern being Exchange and the System State of the DCs. From my understanding, DPM can only protect AND restore data on a server which is part of the same domain as the backup server. If I restore the System State (containing Active Directory) and the Exchange Storage Groups onto a testing server, I am afraid it would completely disturb the functioning of the domain (for example, by creating two primary DCs on the domain).

    I am thinking about building a second DPM server on a separate testing domain, which would mirror the replicas, and then restoring onto testing servers from this new domain. Is that the right way to handle data recovery testing? How did you do it on your domain when you first deployed DPM? I'd be grateful for any link/documentation or advice. Thank you in advance for your help!

    EDIT: Two options seem possible so far: (i) create another DC/Exchange server in the alternate location; (ii) create a separate domain in the alternate location and set up a trust between this domain and the production one. Option (i) is certainly the best, but it implies setting up a secondary Exchange server with a dedicated public IP address, so that if Exchange #1 dies we can still send emails with Exchange #2. I don't know how complex this can be and would need to discuss it with my colleagues. Option (ii) would only fit the testing purposes. My only question regarding this is: if my production and DPM servers are part of domain A, and there is a trust between domains A and B, can I restore domain A content to any domain B server?


  • How to send HTML mail using the Linux command line

    - by Diesel Draft
    Hi, I need to send mail in HTML format. I have only a Linux command line and the "mail" command. Currently I have used:

        echo "To: [email protected]" > /var/www/report.csv
        echo "Subject: Subject" >> /var/www/report.csv
        echo "Content-Type: text/html; charset=\"us-ascii\"" >> /var/www/report.csv
        echo "<html>" >> /var/www/report.csv
        mysql -u ***** -p***** -H -e "select * from users LIMIT 20" dev >> /var/www/report.csv
        echo "</html>" >> /var/www/report.csv
        mail -s "Built notification" [email protected] < /var/www/report.csv

    But in my mail agent I get only text/plain.
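
    The trouble is that mail(1) treats the whole redirected file as the message body and writes its own headers, so the To/Subject/Content-Type lines arrive as body text rather than as headers (hence text/plain). Piping through sendmail -t, which reads the recipients and headers from the message itself, is the usual fix; note the mandatory blank line between headers and body:

        {
          echo "To: [email protected]"
          echo "Subject: Built notification"
          echo 'Content-Type: text/html; charset="us-ascii"'
          echo                  # blank line: headers end, body begins
          echo "<html>"
          mysql -u ***** -p***** -H -e "select * from users LIMIT 20" dev
          echo "</html>"
        } | sendmail -t

    (mysql -H already emits the result as an HTML table, which is why this works at all; the credentials are left redacted as in the original.)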


  • Need help understanding moving up using relative paths...

    - by Joel
    I'm not sure why this isn't working, so I must not be understanding things correctly. I'm putting a working live site onto my localhost for dev work, so my site can be seen at the URL example.com or at localhost/example.com.

    OK. I have a page at example.com/video/pageone.php. On that page, I'm including a header with:

        <?php include '/home/myserver/public_html/includes/website/website-header.php'; ?>

    This works. For some reason, this will not work (on the live site):

        <?php include 'http://www.example.com/includes/website/website-header.php'; ?>

    Can anyone tell me 1) why the http address will not work, and 2) how I can make this work on localhost? Thanks!
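
    On question 1: given a URL, include doesn't read the file off the filesystem; PHP fetches the *output* of the remote script (already executed by the remote server), and even that only works when allow_url_include is enabled, which it usually isn't for security reasons. So any functions or variables the header defines never reach the including page. For question 2, the portable fix is to anchor the path at the document root instead of hard-coding the live server's home directory; a sketch:

        <?php
        // Works on both the live site and localhost, as long as the
        // includes/ directory sits at the site root in each environment.
        include $_SERVER['DOCUMENT_ROOT'] . '/includes/website/website-header.php';
        ?>

    If the localhost copy lives in a subdirectory like localhost/example.com, DOCUMENT_ROOT points above it; in that case, define a SITE_ROOT constant once per environment in a config file and build include paths from that.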


  • How should open source libraries be used on Windows?

    - by Jason Owen
    There are many open-source libraries that can be compiled with Visual Studio. I'm porting a program from Linux to Windows, but it depends on a number of libraries. I don't know what the best practices regarding libraries are on Windows. On Linux, these libraries are typically part of the distribution. To use sqlite on Debian, for example, you need only to install libsqlite3-dev and the include files and libraries (both static and dynamic) are automatically installed and available to your program. If you need a different version than your distribution supplies, you can compile it in your home directory, install it to ~/include and ~/lib, and set the appropriate environment variables so that your compiler includes those directories in its search path. What is the best way to use libraries that are distributed as source on Windows? If I link dynamically rather than statically, is there an easy way to copy required DLLs into the output directory to ease redistribution (assuming license requirements are met)?
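
    For the DLL-copying half of the question: one low-tech approach that needs no extra tooling is a post-build event in Visual Studio; a sketch, assuming the DLLs are collected in a libs folder under the solution:

        xcopy /y /d "$(SolutionDir)libs\*.dll" "$(TargetDir)"

    $(SolutionDir) and $(TargetDir) are standard Visual Studio macros, and /d copies only newer files so incremental builds stay fast. (Managed assemblies referenced as project references with Copy Local set to true get this behavior automatically; native DLLs generally need the explicit copy step.)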


  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage them to be put on a file server so that others (with appropriate permissions) can use them and the files are backed up properly. The result of this is that most users have large hard drives that sit mainly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?

    What I envision is a client program that is pushed out to users' PCs and coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among the various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy, the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group, to protect against a disaster in one building taking down everything else.

    Obviously you wouldn't point your database server here, but for simpler things I see several advantages:

    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - It should ultimately be cheaper, as you don't need to keep a separate set of disks.

    I can see a few downsides as well:

    - Occasional degradation of user PC performance, if the machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
    - You still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials.

    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about two years, and I'm looking into replacing it with a small SAN; I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system.

    Unfortunately, the closest thing I can find is Dienst, and that's just a paper dating back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?


  • Costs and scope in developing a typical iPhone application

    - by ali
    I am new to iPhone development and have been tasked with developing a fairly simple iPhone application. It would basically show listings of information (e.g. accommodations, restaurants), around 8-9 different types. Drilling into one would show its details. These are dynamically sourced from a db (through an XML feed) that powers an existing website. Users should also have the ability to save favourites, plus an interactive Google map showing the locations of these places. I would just like to know how long such an iPhone application would take to develop and what it would cost. As I am new to iPhone dev, I do not know how big the scope is, what complications to anticipate, or what scope creep issues to expect, and therefore how much to charge. I want to give a reasonable estimate so that I don't overcharge.


  • ASP.Net: HTTP 400 Bad Request error when trying to process http://localhost:5957/http://yahoo.com

    - by mat3
    I'm trying to create something similar to the DiggBar: http://digg.com/http://cnn.com. I'm using Visual Studio 2010 and the ASP.NET Development Server. However, I can't get the dev server to handle the request, because the path contains "http:". I've tried to create an HttpModule to rewrite the URL in BeginRequest, but the event handler doesn't get called when the URL is http://localhost:5957/http://yahoo.com. The event handler does get called if the URL is http://localhost:5957/http/yahoo.com.

    To summarize:

    - http://localhost:5957/http/yahoo.com works
    - http://localhost:5957/http//yahoo.com does not work
    - http://localhost:5957/http://yahoo.com does not work
    - http://localhost:5957/http:/yahoo.com does not work

    Any ideas?
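
    By default, ASP.NET 4 rejects request paths containing a colon before any module runs, which is consistent with the handler never firing. The relevant knob is httpRuntime's requestPathInvalidCharacters, whose default list includes the colon; a sketch of a web.config that removes it from the list:

        <configuration>
          <system.web>
            <!-- Default is <,>,*,%,&,:,\,? ; this list omits the colon. -->
            <httpRuntime requestPathInvalidCharacters="&lt;,&gt;,*,%,&amp;,\,?"
                         relaxedUrlToFileSystemMapping="true" />
          </system.web>
        </configuration>

    relaxedUrlToFileSystemMapping stops ASP.NET from insisting that the URL map to a valid file-system path. Both attributes are .NET 4.0 features; whether the Visual Studio Development Server honors them exactly as full IIS does is worth verifying, as it has its own quirks with unusual URLs.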


  • Get local network interface addresses using only proc?

    - by Matt Joiner
    How can I obtain the (IPv4) addresses for all network interfaces using only proc? After some extensive investigation I've discovered the following:

    - ifconfig makes use of SIOCGIFADDR, which requires open sockets and advance knowledge of all the interface names. It also isn't documented in any manual pages on Linux.
    - proc contains /proc/net/dev, but this is a list of interface statistics.
    - proc contains /proc/net/if_inet6, which is exactly what I need, but for IPv6.
    - Generally, interfaces are easy to find in proc, but actual addresses are very rarely used except where explicitly part of some connection.
    - There's a function called getifaddrs, which is very much the kind of "magical" call you'd expect to see on Windows. It's also implemented on BSD. However, it's not very text-oriented, which makes it difficult to use from non-C languages.
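
    For completeness, the getifaddrs route mentioned above is short enough to sketch in full; it's glibc (and BSD) rather than proc, but it returns exactly the per-interface IPv4 addresses in question:

        #include <ifaddrs.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>

        int main(void)
        {
            struct ifaddrs *ifas, *ifa;
            if (getifaddrs(&ifas) == -1) {
                perror("getifaddrs");
                return 1;
            }
            for (ifa = ifas; ifa != NULL; ifa = ifa->ifa_next) {
                /* Skip entries with no address, and non-IPv4 entries. */
                if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET)
                    continue;
                char buf[INET_ADDRSTRLEN];
                struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
                inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof buf);
                printf("%s\t%s\n", ifa->ifa_name, buf);
            }
            freeifaddrs(ifas);
            return 0;
        }

    If proc alone is a hard requirement: the kernel exposes assigned IPv4 addresses through netlink (what "ip -4 addr" uses) rather than any stable /proc file, so shelling out to ip or speaking netlink directly are the remaining options.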


  • MongoDB and datasets that don't fit in RAM no matter how hard you shove

    - by sysadmin1138
    This is very system dependent, but chances are near certain we'll scale past some arbitrary cliff and get into Real Trouble. I'm curious what kind of rules-of-thumb exist for a good RAM to disk-space ratio. We're planning our next round of systems and need to make some choices regarding RAM, SSDs, and how much of each the new nodes will get.

    But now for some performance details!

    During the normal workflow of a single project-run, MongoDB is hit with a very high percentage of writes (70-80%). Once the second stage of the processing pipeline hits, it's extremely high read, as it needs to deduplicate records identified in the first half of processing. This is the workflow for which "keep your working set in RAM" is made, and we're designing around that assumption.

    The entire dataset is continually hit with random queries from end-user derived sources; though the frequency is irregular, the size is usually pretty small (groups of 10 documents). Since this is user-facing, the replies need to be under the "bored-now" threshold of 3 seconds. This access pattern is much less likely to be in cache, so it will be very likely to incur disk hits.

    A secondary processing workflow is high read of previous processing runs that may be days, weeks, or even months old; it runs infrequently but still needs to be zippy. Up to 100% of the documents in the previous processing run will be accessed. No amount of cache-warming can help with this, I suspect.

    Finished document sizes vary widely, but the median size is about 8K. The high-read portion of the normal project processing strongly suggests the use of replicas to help distribute the read traffic.

    I have read elsewhere that a 1:10 RAM-GB to HD-GB ratio is a good rule-of-thumb for slow disks. As we are seriously considering much faster SSDs, I'd like to know if there is a similar rule of thumb for fast disks. I know we're using Mongo in a way where cache-everything really isn't going to fly, which is why I'm looking at ways to engineer a system that can survive such usage. The entire dataset will likely be most of a TB within half a year and will keep growing.

