Search Results

Search found 14831 results on 594 pages for 'header fields'.


  • How to link individual cells in Excel to variable data from records in an external SQL table

    - by user273476
    I have a SQL table containing date-oriented financial data: multiple daily records with fields for Date, Account code and Value. I want to set up dynamic links (formulas) from cells in an Excel spreadsheet to this data, so that when the spreadsheet is loaded the data is fetched from all the relevant records. The spreadsheet has the Account codes on the x axis and Dates on the y. Each day the SQL table gets new data for the new day, and I want the spreadsheet column for that day to reference the new data. Any ideas? I have seen how you can bring data in from a SQL table generally (in our case using ODBC, as it is not MS SQL), but that simply imports multiple records wholesale, as you would a CSV file; what I need is specific records in the SQL table linked to specific cells and columns in the spreadsheet.
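
    A minimal sketch of one common approach, assuming (these names are hypothetical) the ODBC query is imported onto a worksheet named Data with Date in column A, Account code in column B and Value in column C, and set to refresh on open. Each grid cell then looks up its own record:

        =SUMIFS(Data!$C:$C, Data!$A:$A, $A2, Data!$B:$B, B$1)

    Here $A2 holds the row's date and B$1 the column's account code. SUMIFS needs Excel 2007 or later; on older versions a SUMPRODUCT over the same ranges does the same job.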

    Read the article

  • Why can't mplayer/libdvdcss/whatever play my DVDs?

    - by che
    To put it bluntly, mplayer is unable to correctly play video DVDs. It seems to correctly find the title and everything, but the picture is broken or not displayed at all, with messages like:

        a52: CRC check failed!
        a52: error at resampling
        [mpeg1video @ 0xa8d840]sequence header damaged
        [mpeg1video @ 0xa8d840]Missing picture start code

    Now, all this is on an amd64 Gentoo Linux system. I believe the problem is not in mplayer itself, since playback also breaks in VLC, or when I copy the VOBs via vobcopy and try to play them afterwards. I use libdvdcss-1.2.10 and libdvdread-4.1.3_p1168 (current stable in Gentoo), and have tried previous versions of both libs, but it didn't change a thing. The DVDs I have tried play fine in a regular DVD player or on a Windows laptop. I remember that playback used to work about a year ago, and I don't know what to try next. Any hints would be welcome.
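
    One avenue worth ruling out (an assumption, not a confirmed diagnosis): CRC failures from every player, including on copied VOBs, often mean CSS decryption is failing, and libdvdcss cannot derive keys on some drives until the drive's region code has been set at least once. A quick sketch:

        # inspect/set the drive's DVD region (device path assumed to be /dev/sr0)
        regionset /dev/sr0

        # retry playback of title 1, bypassing the menus
        mplayer dvd://1 -dvd-device /dev/sr0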

    Read the article

  • How can I get rid of the long Google results URLs?

    - by Teifi
    google.com is always shielded by our firewall. When I search for something at google.com, a result list appears. When I click a link, the URL changes to a processed URL like:

        http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDcQFjAA&url=http%3A%2F%2Fwww.amazon.com%2F&ei=PE_AUMLmFKW9iAfrl4HoCQ&usg=AFQjCNGcA9BfTgNdpb6LfcoG0sjA7hNW6A&cad=rjt

    Then my browser is blocked, I guess because of google.com. The only useful information in that long processed URL is http%3A%2F%2Fwww.amazon.com (http://www.amazon.com). My questions: What is the meaning of that long processed URL? Is there a way to remove the google.com/url?sa... prefix each time I click a search result?
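
    For what it's worth, that wrapper is Google's click-tracking redirect: everything except the url parameter (sa, ved, ei, usg and so on) is tracking metadata, and the real target is just the percent-encoded url value. A small sketch of recovering it in Python, using the URL from the question:

        from urllib.parse import urlparse, parse_qs

        wrapped = ("http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1"
                   "&ved=0CDcQFjAA&url=http%3A%2F%2Fwww.amazon.com%2F"
                   "&ei=PE_AUMLmFKW9iAfrl4HoCQ"
                   "&usg=AFQjCNGcA9BfTgNdpb6LfcoG0sjA7hNW6A&cad=rjt")

        # parse_qs percent-decodes the parameter values
        target = parse_qs(urlparse(wrapped).query)["url"][0]
        print(target)  # -> http://www.amazon.com/

    Browser extensions that rewrite the results page to use the direct links do essentially this.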

    Read the article

  • Enable mod_deflate per directory level

    - by z1_jabbar
    I am using the following code. When I access the site it compresses all the JSPs under the /abc path, but it ignores all the JS and CSS files. I want to compress the JS and CSS files under all the subfolders of the /abc path as well. How can I do this? Thanks!

        <LocationMatch "/abc">
          <IfModule mod_deflate.c>
            SetOutputFilter DEFLATE
            # Don't compress images
            SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
            # Don't compress PDFs
            SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary
            # Don't compress compressed file formats
            SetEnvIfNoCase Request_URI \.(?:7z|bz|bzip|gz|gzip|ngzip|rar|tgz|zip)$ no-gzip dont-vary
            <IfModule mod_headers.c>
              Header append Vary User-Agent
            </IfModule>
          </IfModule>
        </LocationMatch>
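
    A sketch of one thing that might be worth trying (not a verified fix): scope compression by content type rather than by output filter, so that however the JS and CSS under /abc are served, they still match. With Apache 2.2's mod_deflate:

        <LocationMatch "/abc">
          AddOutputFilterByType DEFLATE text/html text/css application/javascript application/x-javascript
        </LocationMatch>

    If the JS/CSS are actually served from outside /abc (a shared static path, say), the LocationMatch pattern itself would need widening; checking the request paths in the access log would confirm which case applies.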

    Read the article

  • Data take-on with Drupal 6

    - by Robert MacLean
    We are migrating our current intranet to Drupal 6, and there is a lot of data within the current system, which can be classified into:

    1) List data - general lists of fields. A common example is a phone list of employees' phone numbers.
    2) Document repository - basically a web version of a file share for documents.

    I can easily get the data plus meta information out, but how do I bulk upload these two types of data into Drupal? Uploading hundreds of thousands of items manually is just not acceptable.

    Read the article

  • How do I configure namecheap for "arbitrarily-nested" wildcard subdomains?

    - by rabidsnail
    I'm trying to set up something like nyud.net, where any arbitrary chain of subdomains resolves to the same CNAME record (which in my case points to an Amazon Elastic Load Balancer). For example, www.google.com.nyud.net:8080 points to one of their cache servers, which looks at the Host header and returns www.google.com. I'm using namecheap as my DNS host. Adding a CNAME record for *.mydomain.com doesn't seem to do anything (nslookup gives NXDOMAIN for all subdomains). What do I have to do to set this up? Do I have to use something fancier than namecheap (like Route 53)?
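
    On the DNS side this should need only a single record: a wildcard does match any depth of labels, provided no more specific record exists alongside it. In zone-file terms the goal is a sketch like this (the ELB hostname is a placeholder):

        *.mydomain.com.   3600   IN   CNAME   my-elb-1234567890.us-east-1.elb.amazonaws.com.

    So the first thing to verify is whether the wildcard is being served at all, e.g. dig a.b.mydomain.com @<namecheap nameserver>; if the panel accepted the record but the nameservers still return NXDOMAIN, that points at the DNS host rather than at the record.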

    Read the article

  • Xterm is not completely erasing field lines

    - by user26367
    We have an SSH tunnel to a remote Unix box from Windows clients using Cygwin. It launches a terminal program from the Unix box locally on the Windows box for data input. The xterm window is launched as follows:

        xterm -fn 10x20 -bg DodgerBlue4 -fg white -cr white -ls -geometry 90x30 -e program

    When a screen goes from read-only mode to edit mode, the edit fields are drawn with underscores (____). When going back to read-only mode, a single-pixel artifact is left behind for each field:

        *readonly*
        User:

        *edit*
        User: ___________

        *after edit exit*
        User: .        <- this dot is left behind

    Any idea what we need to change to fix this?

    Read the article

  • How do I create a table of contents in a Word document that has a mind of its own?

    - by Howiecamp
    I'm embarrassed to admit that I'm struggling to get a table of contents going in a Word doc that's already been created. I know enough to understand that the TOC is based on heading styles and indentation. My approach so far has been to auto-generate the TOC and then try (unsuccessfully) to fix the problems; perhaps this isn't the best approach in this situation. What's happening is that the TOC is missing half my sections, while for others it's adding way too much detail. Again, my sense is that I have to "fix" the individual section headings, but I haven't been successful so far.

    Read the article

  • Changing Admin Site URL (actually port) - how?

    - by TomTom
    I have a new install of the brand new SharePoint 2010. I use host-header-identified site collections for everything. By default the admin site is on a random port. I would like to move the admin site to port 80, under the server name. As all sites have coded names (for example "intranet", "projects"), this would allow administration via the server name - which is easier, as external users do not have to remember the port number. How do I do this? I already changed the default URL, but the site (application) is still wrongly mapped. I don't find anything to change the IIS settings in the admin site. Possibly I just miss it - so can anyone point me in the right direction?
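
    For reference, the supported route in SharePoint 2010 is to re-provision Central Administration on the new port rather than to edit the IIS bindings by hand. A sketch (try it on a test farm first, and treat the exact flags as assumptions to verify against the documentation):

        # SharePoint 2010 Management Shell
        Set-SPCentralAdministration -Port 80

        # or the older psconfig route
        psconfig -cmd adminvs -provision -port 80 -windowsauthprovider onlyusentlm

    After that, the admin URL is adjusted under Alternate Access Mappings rather than in IIS directly.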

    Read the article

  • Email bounce back 550 5.1.1 recipient rejected

    - by Stan
    A client recently switched to Exchange Server / Outlook for their email. Since then, emails from my company to any email address at their company bounce back from the System Administrator with this error:

        Your message did not reach some or all of the intended recipients.
            Subject: Email Solution
            Sent:    12/12/2012 11:08 AM
        The following recipient(s) cannot be reached:
            '[email protected]' on 12/12/2012 11:08 AM
            550 5.1.1 <[email protected]> recipient rejected

    Looking in the Message Options of the bounce-back email shows no data in the Internet Header field. My client's IT guy says we're not being blocked, but I can't think of any other reason the bounce would occur. Any suggestions on what questions to ask, or how to fix this, would be helpful. I'm using a desktop version of Outlook 2007 and connecting through my ISP. Thanks.
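
    One way to gather evidence before the next conversation with their IT guy: speak SMTP to their mail server by hand and see which hop issues the 550. A sketch (hostnames and addresses are placeholders):

        # find their inbound mail server
        nslookup -type=MX clientdomain.example

        # then talk to it directly
        telnet mail.clientdomain.example 25
        HELO mydomain.example
        MAIL FROM:<me@mydomain.example>
        RCPT TO:<user@clientdomain.example>
        QUIT

    A 550 5.1.1 at the RCPT TO step, straight from their Exchange, is its "recipient not found" response, which would suggest the address isn't (or is no longer) in their directory after the migration - an alias or accepted-domain mismatch, say. That narrows the questions to ask considerably.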

    Read the article

  • Load balancing + NAT issue on BNT GBE 2-7 gear

    - by Clément Game
    Hi guys, I'm having trouble configuring a hardware load balancer with NAT functions. I have the following architecture:

        Internet === VIP (public) LB (private ip) ==== privately addressed servers

    When a connection is initiated from the outside (Internet), the LB correctly forwards the SYN packet to one of the private servers. But when these servers want to reply with a SYN/ACK, there is a problem: the initial SYN packet arrived with an IP header of VIP = Private_server_Address, but the private servers cannot reach the VIP from their side (this is normal, since it's NATed), and so cannot produce a correct reply. Have you guys any solution to correctly forward the packets to their correct destination? Note: the load balancer, which is the default gw for the servers, also has a NAT rule for "masquerading" (actually more SNAT than real masquerading). Regards, Clément.
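
    For reference, the textbook cure when replies would bypass or confuse the translator is full NAT on the load balancer: DNAT the VIP to the chosen real server and SNAT the client address to the LB's own private address, so the SYN/ACK is forced back through the LB to be un-NATted. Expressed conceptually with Linux iptables rather than the BNT syntax (all addresses are placeholders, and this is a sketch of the idea only):

        # DNAT: traffic hitting the VIP goes to the chosen real server
        iptables -t nat -A PREROUTING  -d 203.0.113.10 -p tcp --dport 80 \
                 -j DNAT --to-destination 10.0.0.11:80

        # SNAT: make the connection appear to come from the LB itself,
        # so the server's SYN/ACK must return via the LB
        iptables -t nat -A POSTROUTING -d 10.0.0.11 -p tcp --dport 80 \
                 -j SNAT --to-source 10.0.0.1

    The trade-off of full SNAT is that the real servers no longer see the true client IP.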

    Read the article

  • IE I-Frame 1px border to the right

    - by Jackie
    Please look at http://www.mymix947.com. In the header I have a 1px border to the right of the banner: you will see a black line dividing the banner and the Listen Live button. This is an iframe, and I can't seem to eliminate the line. It seems to only happen with Windows 7 / IE 8.0.7. When my browser is full screen I don't see it, but if I shrink the browser slightly, the line is there. Any tips would be great! Thanks!
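
    In case it helps, the usual belt-and-braces reset for iframe borders in older IE combines the attribute and the CSS, since IE8 honours the frameborder attribute more reliably than the style alone. A sketch (src and dimensions are placeholders):

        <iframe src="banner.html" frameborder="0" scrolling="no"
                width="728" height="90" style="border: 0; display: block;"></iframe>

    The fact that the line only appears at some window widths also smells like sub-pixel rounding of a percentage width, so giving the iframe and its container fixed pixel widths is worth trying too.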

    Read the article

  • nginx errors: upstream timed out (110: Connection timed out)

    - by Sparsh Gupta
    Hi, I have an nginx server with 5 backend servers. We serve around 400-500 requests/second. I have started getting a large number of upstream timed out errors (110: Connection timed out). The error string in error.log looks like:

        2011/01/10 21:59:46 [error] 1153#0: *1699246778 upstream timed out (110: Connection timed out) while reading response header from upstream, client: {IP}, server: {domain}, request: "GET {URL} HTTP/1.1", upstream: "http://{backend_server}:80/{url}", host: "{domain}", referrer: "{referrer}"

    Any suggestions on how to debug such errors? I am unable to find a munin plugin to keep a check on the number of upstream errors. Some days the number of errors is way too high, and some days it's a more decent 3-digit number; a munin graph would probably help us find a pattern or a correlation with something else. How can we bring the number of such errors to zero?
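
    For a starting point, these are the stock nginx knobs that govern this exact error; the values and the upstream group name are illustrative, not recommendations:

        location / {
            proxy_pass http://backends;

            proxy_connect_timeout 5s;    # establishing the TCP connection to a backend
            proxy_read_timeout   30s;    # gap allowed while reading the response (the timer behind this log line)
            proxy_send_timeout   30s;    # gap allowed while sending the request

            # on error/timeout, retry the request on the next backend instead of failing
            proxy_next_upstream error timeout;
        }

    Since the message says "while reading response header from upstream", it is proxy_read_timeout that expired, i.e. a backend sat on a request too long; the debugging question is which backend and which URLs. Grepping hourly counts of "upstream timed out" out of error.log, split by the upstream field, is simple enough to feed into a small custom munin plugin even without a ready-made one.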

    Read the article

  • Use Windows 7 offline sync with external usb hd

    - by René
    Yeah, truly, the whole question is in the header. Is there a way to use Windows 7 offline sync (which we know from network-mapped drives) with an external USB HD? If not, are there similar built-in tools, or good third-party tools? My scenario: I want to buy an ultrabook with an SSD, which is rather limited in space, so I'm going to put all files on an external HD and only store current projects on the local SSD. Say I have to change projects: it would be easy to just change the sync folders and have the second project synced to my HD too. With network-mapped drives it's that easy - paths do not differ if the drive is offline, so in most situations you don't even notice that a folder is offline, and you only have to activate offline files for the folders you currently need for work. So, is there a similar solution for USB hard drives?
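
    As far as I know, Offline Files only accepts network paths, so the built-in fallback is a scheduled mirror job. A sketch with robocopy, which ships with Windows 7 (paths are examples; note that /MIR deletes files on the destination that no longer exist on the source):

        :: pull the current project onto the SSD
        robocopy E:\Projects\Alpha C:\Projects\Alpha /MIR /FFT /R:2 /W:5

        :: ...work locally, then push the changes back to the external HD
        robocopy C:\Projects\Alpha E:\Projects\Alpha /MIR /FFT /R:2 /W:5

    Unlike Offline Files this gives two distinct paths rather than one transparent one, which is the gap that third-party sync tools try to smooth over.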

    Read the article

  • Which method of SQL Server 2005 or 2008 Replication is best for ease of field changes?

    - by Rick
    We need 15-minute warm updates from one SQL Server to another. Log shipping looks good and appears easy to set up; we are also looking into transactional replication. The data only needs to be copied one way. We have two main requirements: 1) The destination database needs to be at most a 15-minute-old copy of the source, and it needs to retry and catch up if a network cable is unplugged for a while. 2) We would really like table changes in the source (fields added or modified) to be as easy as possible. Thanks in advance for all suggestions.

    Read the article

  • How can I add metadata to NTFS files/folders?

    - by Pwdr
    I want to tag different file types (i.e. .pdf, .epub, .iso, .bin, folders, ...) using the same descriptive fields. For example, I would like a metadata field "type" which would be "eBook" on pdf and epub files, and "CD-Image" on iso and bin files. I have read about Alternate Data Streams (ADS) as a way to make this possible. Does anyone know a good program for Windows 7 to tag different files like this and search by the tags? It is important for me that the metadata is NOT stored in a separate database; I move the files a lot and need to stay flexible (ADSs 'stick' to the files). Any ideas?
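
    For reference, raw ADSs can be written and read with nothing but cmd.exe, which is handy for experimenting before committing to a tagging tool (file names here are examples):

        :: attach a "type" stream to a file
        echo eBook> mybook.pdf:type

        :: read it back
        more < mybook.pdf:type

        :: list the streams on a file (dir's /R switch, Vista and later)
        dir /R mybook.pdf

    One caveat to weigh against the 'sticking' property: ADSs survive copies between NTFS volumes, but are silently dropped on FAT/exFAT drives, in most archive formats, and by most uploads.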

    Read the article

  • Automation of software installation - should I ask for text or file?

    - by Denis
    I am preparing a software installation in a Windows environment for my application. During installation it asks for a Subscriber ID, which has to be entered into a text field. I am wondering whether this is the best approach for mass installations. I know that for mass installations IT teams use systems like Microsoft System Center, which automate deployment, but I do not know much about the capabilities of such systems. Can they automate data entry into text fields? Would it be better to change the installation process to ask not for text but for a file which contains the Subscriber ID? By the way, I am looking for beta testers for my software. This software lets a user view Microsoft Project files without having Microsoft Project installed.
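
    For what it's worth, deployment systems generally do not type into GUI fields; they run installers silently and pass parameters on the command line. If the installer is (or can be) an MSI, the Subscriber ID could be exposed as a public property - a sketch, with a hypothetical property name:

        msiexec /i MyApp.msi SUBSCRIBERID=ABC-12345 /qn

    So rather than a file prompt, the automation-friendly change is usually silent mode plus a command-line parameter, with an optional side-by-side config file as a second way to supply the same value.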

    Read the article

  • How to use postfix header_checks with zarafa outgoing mail

    - by olvrlrnz
    I'm using zarafa as MDA with postfix. For privacy reasons I want to filter out client-internal IP addresses and the like. To do so, I've added the following to master.cf:

        submission inet n - - - - smtpd
            [...]
            -o cleanup_service_name=subcleanup
            [...]

    and further down the file:

        subcleanup unix n - - - 0 cleanup
            -o header_checks=pcre:/etc/postfix/smtp_header_checks

    This works perfectly for clients delivering their mail through the submission port. But zarafa, of course, is not using the submission port to send mail, so it doesn't hit the subcleanup routine, and outgoing mails contain a very nice X-Mailer: Zarafa-exact_version header, which is rather unsatisfying. Is there any way to make zarafa use the subcleanup routine? Any help is much appreciated.
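
    An alternative that side-steps the entry point entirely (a sketch, not tested against zarafa): apply the checks in postfix's SMTP client instead, which every outbound message passes through no matter how it was injected. In main.cf:

        smtp_header_checks = pcre:/etc/postfix/smtp_header_checks

    and in /etc/postfix/smtp_header_checks, for example:

        /^X-Mailer:\s+Zarafa/    IGNORE
        /^Received:\s+from\s+.*(10\.|192\.168\.)/    IGNORE

    The Received pattern here is only illustrative and would need tuning; stripping Received lines wholesale can make later mail debugging harder.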

    Read the article

  • TCP 30 small packets per second flood connection with server

    - by Denis Ermolin
    I'm testing the connection between a Flash client and a cloud server (boost::asio for the software) over TCP. My connection to the server is already really poor - 120 ms ping on average. I found that when I start to send packets of 2 bytes (not counting the TCP header) at 30 packets/s, the ping grows to 170-200 on average. I think that's really bad, and that my bad connection and bad cloud provider are the reason for this high ping without any load. What do you think? (I tested my software - it can handle about 50k small packets/s, so the software is not the problem.) I measure the ping through the Flash client - I send a packet with a timestamp and it is immediately sent back from the server to the client.
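
    One factor worth ruling out before blaming the provider (an assumption to test, not a diagnosis): Nagle's algorithm coalesces small writes and, together with delayed ACKs, can add tens to hundreds of milliseconds to exactly this traffic pattern. On the boost::asio side it is disabled with socket.set_option(boost::asio::ip::tcp::no_delay(true)). The same RTT experiment in plain Python, against a hypothetical echo endpoint:

        import socket, time

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
        s.connect(("server.example", 9000))                      # placeholder endpoint

        # 2-byte payloads at 30 packets/s, measuring echo round-trips
        for _ in range(100):
            t0 = time.time()
            s.sendall(b"\x01\x02")
            s.recv(2)
            print("rtt %.1f ms" % ((time.time() - t0) * 1000))
            time.sleep(1 / 30)

    If the RTT inflation disappears with TCP_NODELAY set on both ends, it was the Nagle/delayed-ACK interaction rather than the network.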

    Read the article

  • Edit exim4 Message-ID for releasing blocked mail by Mailscanner

    - by F12
    Our sysadmin team edits the Message-ID field in exim4 spool header files (the ones ending with -H) and substitutes the first char after "<", e.g.:

        077I Message-ID: <[email protected]   -->   077I Message-ID: <[email protected]

    I'd like to write a script to release the mails. I changed the part between "<" and "@" in the Message-ID field and substituted a hash value, so the Message-ID looks like:

        077I Message-ID: <[email protected]

    Now exim says "format error" in the log, and the mail is not released. There was no change except for this one field. Why can't the ID be substituted like that? Does it need to be exactly the same length? This is exim4 version 4.69-2ubuntu0.3.
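
    A detail that may matter here (an assumption about the spool format, not verified against the 4.69 source): the number in front of the I flag - the 077 above - records the stored length of that header line, so replacing the ID with a string of a different length without updating the count would plausibly produce exactly a format error. A sketch of a same-length substitution in Python, with a hypothetical ID:

        import hashlib

        local = "E1TkXbz-0004dJ-Kd"                     # part between '<' and '@' (made up)
        sub = hashlib.sha1(local.encode()).hexdigest()  # deterministic replacement...
        sub = sub[:len(local)]                          # ...cut to the original length
        assert len(sub) == len(local)
        print(sub)

    If a same-length substitution releases cleanly while a longer one does not, that confirms the length prefix is the constraint.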

    Read the article

  • What's the deal with NTFS tags in Windows 7

    - by polarix
    So back in the days of 'Longhorn' there was this WinFS idea, which was both cool-looking and scary-looking. Then it seemed to disappear, but we were told that many of the concepts would be rolled into Vista, and then maybe into Win7. Anyway, nowadays if you look at a Win7 Explorer window, you can add columns with a lot of tag-based info about a file (right-click on a column header, then More...), including one called "Tags". Is this something in NTFS that can be modified per file somehow? Is its GUI hidden, is it something that's been infinitely delayed, or is it just a figment of my imagination? It sure would be nice to be able to get around the NTFS path 256-character limit for searches, and to filter file folders the way Excel 2007 filters columns.

    Read the article

  • mod_proxy failing as forward proxy in simple configuration

    - by Stabledog
    (On Mac OS X 10.6, Apache 2.2.11.) Following the oft-repeated googled advice, I've set up mod_proxy on my Mac to act as a forward proxy for HTTP requests. My httpd.conf contains this:

        <IfModule mod_proxy>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Allow from all
            </Proxy>
        </IfModule>

    (Yes, I realize that's not ideal, but I'm behind a firewall, trying to figure out why the thing doesn't work at all.) So, when I point my browser's proxy settings to the local server (ip_address:80), here's what happens:

    1) I browse to http://www.cnn.com
    2) I see via sniffer that this is sent to Apache on the Mac
    3) Apache responds with its default home page ("It works!" is all this page says)

    So Apache is not doing as expected - it is not forwarding my browser's request out onto the Internet to CNN. Nothing in the log file indicates an error or problem, and Apache returns a 200 header to the browser. Clearly there is some very basic configuration step I'm not understanding... but what?
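
    For comparison, a minimal known-good forward-proxy stanza for Apache 2.2 is sketched below. Two differences from the config above are worth checking. First, in 2.2 <IfModule> matches the module's file name (mod_proxy.c) or identifier (proxy_module); if the bare mod_proxy token matches neither, everything inside the block is silently skipped, which would yield exactly the "It works!" behaviour. Second, proxying http:// URLs needs mod_proxy_http loaded alongside mod_proxy. (Module paths are as typically found on OS X; adjust to taste.)

        LoadModule proxy_module      libexec/apache2/mod_proxy.so
        LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so

        <IfModule mod_proxy.c>
            ProxyRequests On
            ProxyVia On
            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from 192.168.0.0/16   # your local net; safer than 'Allow from all'
            </Proxy>
        </IfModule>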

    Read the article

  • What does *:* in netstat output stand for?

    - by chello
    While executing the command /usr/sbin/lsof -l -i -P -n as the root user, I get this output:

        COMMAND  PID USER  FD   TYPE    DEVICE SIZE/OFF NODE NAME
        ...
        httpd   9164   70   3u  IPv4 0x2f70270      0t0  TCP 127.0.0.1:9010 (LISTEN)
        httpd   9164   70   4u  IPv6 0x25af4bc      0t0  TCP *:80 (LISTEN)
        httpd   9164   70   5u  IPv4 0x3149e64      0t0  TCP *:* (CLOSED)
        httpd   9180   70   3u  IPv4 0x2f70270      0t0  TCP 127.0.0.1:9010 (LISTEN)
        httpd   9180   70   4u  IPv6 0x25af4bc      0t0  TCP *:80 (LISTEN)
        httpd   9180   70   5u  IPv4 0x3149e64      0t0  TCP *:* (CLOSED)

    Please let me know what *:* stands for - I am interested in both the IP address and the port field. Also, what does (CLOSED) mean here?

    Read the article

  • Apache mod_headers rule to change all cookies to secure

    - by Supowski
    I would like to change all cookies to be Secure and HttpOnly. It works fine for one cookie, but doesn't work when multiple cookies are set in the response. The Apache mod_headers rule should change the cookies from:

        Set-Cookie: cookie1=value; Path=/somePath
        Set-Cookie: cookie2=value; Path=/somePath

    to:

        Set-Cookie: cookie1=value; Path=/somePath; Secure; HttpOnly
        Set-Cookie: cookie2=value; Path=/somePath; Secure; HttpOnly

    I use mod_headers for this, with the following rule:

        Header edit Set-Cookie ^(.*)$ $1;Secure;HttpOnly

    It works fine when only one cookie is set, but if there is more than one, it just removes all the following cookies and they are not set at all. Any help on how to write a mod_headers rule for multiple values? Or is the problem in something else?

    Read the article

  • How to make sure clients update their browser cache when my website is updated?

    - by user64204
    I am using the HTTP 1.1 Cache-Control header to implement client-side caching. Since I update my website only once a month, I would like the CSS and JS files to be cached for 30 days with Cache-Control: max-age=2592000. The problem is that the 30-day period defined by Cache-Control doesn't coincide with the website's update cycle: it starts from the moment a user visits the site and ends 30 days later, which means an update could occur in the meantime and users would be running with outdated content for a while. That could break the rendering of the website if, for instance, the HTML and CSS no longer match. How can I perform client-side caching of content for periods of several days, but somehow get users to refresh their CSS/JS files after the website has been updated? One solution I could think of: if website updates can be scheduled, the max-age returned by the server could be decreased every day accordingly, so that no matter when people visit the website, the end of the caching period would coincide with the update of the website. But changing the server configuration every day goes against one of my sysadmin principles: once it's running, don't touch it.
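
    The standard trick is to keep the long max-age but change the asset URL whenever the content changes - a new URL is, by definition, not in any cache. A sketch:

        <!-- before the update -->
        <link rel="stylesheet" href="/css/site.css?v=2012-05">
        <script src="/js/app.js?v=2012-05"></script>

        <!-- after the monthly update: bump the version and caches fetch a "new" object -->
        <link rel="stylesheet" href="/css/site.css?v=2012-06">
        <script src="/js/app.js?v=2012-06"></script>

    The HTML document itself is served with a short max-age (or must-revalidate) so the new references are picked up promptly; only the versioned CSS/JS get the 30-day lifetime, and the HTML and CSS can never mismatch because each HTML revision points at exactly the stylesheet it was built with.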

    Read the article
