Search Results

Search found 29949 results on 1198 pages for 'large scale website'.


  • Copying Tables from a Website

    - by amemus
    I have difficulty making an Excel-readable file from a table on a website. The problems specific to my question are: I have to use IE 7 to access the site; Excel is installed on another computer; and the site does not let me view the HTML of the table. Normally, I would simply select the table I want and drag and drop it into Excel, or I would view the page source and copy the HTML data. Neither works in this case. Is there any handy tool out there?

    Read the article

  • When does a cached website refresh?

    - by user142485
    If I go to example.com [placeholder for a different website], the browser creates numerous cache entries for different items on the page and two entries for example.com itself, one of which expires in 1.5 hrs while the other has 'No expiration time.' What I am wondering is: when does the browser display the cached page, and when does it get the newest version from the server (when re-visiting the site)? What do the two different expiration times for the top-level domain mean? A web page I went to was temporarily redirecting to a different page; after the correct page was back up, I was still getting redirected to the temporary page. Would this have eventually resolved itself based on the expiration times, or does the cache need to be cleared?
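
    As a rough illustration of where those two kinds of entries come from (a hypothetical servlet, assuming only the standard javax.servlet API; nothing here is specific to the site in question): a response served with a max-age / Expires is reused straight from cache until that time passes, and only then revalidated with the server, while a response with no expiration information is typically revalidated with a conditional request on the next visit.

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class CachedPageServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // The browser may serve this page from its cache without contacting the
                // server for up to 1.5 hours; after that it revalidates with a
                // conditional request (If-Modified-Since / ETag).
                resp.setHeader("Cache-Control", "max-age=5400");
                resp.setDateHeader("Expires", System.currentTimeMillis() + 5400L * 1000L);
                resp.setContentType("text/html");
                resp.getWriter().println("<html><body>cached for 1.5 hours</body></html>");
            }
        }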

    Read the article

  • Emulate and debug a BlackBerry mobile website

    - by Pennf0lio
    I'm developing a website that needs to be compatible with mobile phones (esp. BlackBerry). How can I debug its CSS? I already installed the Default User Agent plug-in for Firefox; the problem is I just don't know how to add the BlackBerry agent. The next problem is: how can I properly emulate the mobile site without actually having the device? Thanks! I'm using Windows 7, Firefox + Firebug + Web Developer Toolbar. I know XHTML, CSS and WordPress.

    Read the article

  • Website occasionally does not load on first click

    - by tfe
    Today I noticed that my website, hosted on a virtual server, occasionally does not load on the first click. I click on some link, the browser starts loading the page, but nothing loads and no error message appears (like "connection reset by peer", etc.). Nothing. When I click the same link again, the page loads immediately. The same thing happens on 2 computers in different browsers. It doesn't happen every time, maybe once in every 20 or 30 clicks. Sites on other servers load without this problem. Any ideas what can cause this?

    Read the article

  • Allow access only to one website

    - by Alex
    Hey. I'd like to allow a computer connected directly to the internet to access one website ONLY. The solutions of IE's "Content Advisor" or Firefox's "FoxFilter" aren't good enough because they actually download the data and just don't display it. I want to block the traffic before the requests are sent. How is this possible? Thanks. Edit: the OS is Windows XP. The browser can be Firefox, Internet Explorer, Chrome... it doesn't matter. The computer is connected directly to the modem.

    Read the article

  • Authentication through mod_auth_kerb should serve the website with no user if no TGT is provided

    - by loomi
    Users are authenticated by mod_auth_kerb, which works great; for that I need to set Require valid-user. If there is no valid user, Apache fails with a 401 Authorization Required. I would like Apache to deliver the website anyway, just without providing a REMOTE_USER to the underlying script. This is related to "How to tell mod_auth_kerb to do its job despite no 'require valid-user'", but with the important difference that on a whole subdirectory, on every URL, a Kerberos negotiation should be initiated, and if it fails the content should be delivered anyway.

    Read the article

  • Hosting a website when port 80 is taken?

    - by cinqoTimo
    A few months ago, we purchased an R-HUB unit to replace WebEx for remote support. The device operates on port 80, which doesn't appear to be configurable. I know in IIS you can specify a port besides port 80, but the problem is in the port forwarding. On our router, we have to map an incoming port to a forwarded port which then directs traffic to the node (webserver). However, the incoming port for both the webserver and the R-HUB is 80 - and the server seems to be getting confused, as I can only get to the R-HUB, not the website. How can I expose both devices? Host headers? DNS config?

    Read the article

  • Hosting a website on a private network

    - by razor
    I'm currently running a website off 3 Linux servers. I'd like to set up a private network and only allow port 80 traffic to one of the servers. I'd also like to set up a VPN so only I can access the servers via SSH or any other port for developing/debugging. How hard is this to set up and what do I need to get? Do enterprise/commercial routers have VPN functionality built in? How do I handle DNS? E.g. www.mydomain.com would need to point to the router, which forwards traffic to the webserver. Do I set the A record to the router, and somehow tell the router which server to send the HTTP request to? And how would I make server1.mydomain.com resolve to server1 within the private network (without editing hosts files)? Would I need to run my own DNS (e.g. PowerDNS) to do this?

    Read the article

  • Best way to administer a website remotely

    - by Mark Szymanski
    I have a Windows computer running an intranet website with IIS, and I was wondering what the best way is to administer it from another computer, in this case a Mac. What I want to do: be able to edit pages from my Mac, and VNC into the server because it is 'headless' (I already have this set up). My current file-syncing setup: I have Dropbox set up to sync files between the computers, and then use PureSync to sync the files in the Dropbox folder into the wwwroot folder. Is there a better (faster/easier) way I could do all this? Thanks in advance!

    Read the article

  • Handling long-running large transactions with Perl DBI

    - by 1stdayonthejob
    I've got a large transaction comprising getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I've only got permission to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes.

    How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for data manipulation and insertion into database B.

    The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows. One way I thought I could do this is to write a script that commits per group, which means I've got to track which groups have already been inserted into database B. The only way I can think of to track the progress of which groups have been processed is either a log file or a table in database B. A second way I thought could work is to dump all the fields needed for loading the classes into a flat file, read the file to initialize the classes, and insert into database B. This also means that I've got to do some logging, but it should narrow it down to the exact row in the flat file if any error occurs. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A
        my $dbh = DBI->connect('dbi:Oracle:my_db', $user, $password,
                               { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on the group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups;    # I have a list of these already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check if the group has already been processed,
                # either from the log file or from a database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load perl classes
                # for insertion into database B
                # .
                # .
                # .

                # record the group as done only after it has been inserted
                print $fh "$g\n";
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping the ones that have been committed to database B, and processing the remaining data.

    I'm sure there's a better way of doing this, but I am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into another? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.

    Read the article

  • How to save a large nhibernate collection without causing OutOfMemoryException

    - by Michael Hedgpeth
    How do I save a large collection with NHibernate which has elements that surpass the amount of memory allowed for the process? I am trying to save a Video object with NHibernate which has a large number of Screenshots (see below for code). Each Screenshot contains a byte[], so after NHibernate tries to save 10,000 or so records at once, an OutOfMemoryException is thrown. Normally I would try to break up the save and flush the session after every 500 or so records, but in this case, I need to save the collection because it automatically saves the SortOrder and VideoId for me (without the Screenshot having to know that it was a part of a Video). What is the best approach given my situation? Is there a way to break up this save without forcing the Screenshot to have knowledge of its parent Video? For your reference, here is the code from the simple sample I created:

        public class Video
        {
            public long Id { get; set; }
            public string Name { get; set; }

            public Video()
            {
                Screenshots = new ArrayList();
            }

            public IList Screenshots { get; set; }
        }

        public class Screenshot
        {
            public long Id { get; set; }
            public byte[] Data { get; set; }
        }

    And mappings:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="SavingScreenshotsTrial"
                           namespace="SavingScreenshotsTrial"
                           default-access="property">
          <class name="Screenshot" lazy="false">
            <id name="Id" type="Int64">
              <generator class="hilo"/>
            </id>
            <property name="Data" column="Data" type="BinaryBlob" length="2147483647" not-null="true" />
          </class>
        </hibernate-mapping>

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="SavingScreenshotsTrial"
                           namespace="SavingScreenshotsTrial" >
          <class name="Video" lazy="false" table="Video" discriminator-value="0" abstract="true">
            <id name="Id" type="Int64" access="property">
              <generator class="hilo"/>
            </id>
            <property name="Name" />
            <list name="Screenshots" cascade="all-delete-orphan" lazy="false">
              <key column="VideoId" />
              <index column="SortOrder" />
              <one-to-many class="Screenshot" />
            </list>
          </class>
        </hibernate-mapping>

    When I try to save a Video with 10000 screenshots, it throws an OutOfMemoryException. Here is the code I'm using:

        using (var session = CreateSession())
        {
            Video video = new Video();
            for (int i = 0; i < 10000; i++)
            {
                video.Screenshots.Add(new Screenshot() { Data = camera.TakeScreenshot(resolution) });
            }
            session.SaveOrUpdate(video);
        }

    Read the article

  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases be merged into one database, so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years of order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get so busy over the next few years that they exceed the capacity of a MySQL database on a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or too large table files to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research - it would be highly appreciated! Cheers, imanc

    Read the article

  • A view interface for large object/array dumps

    - by user685107
    I want to embed in a page a detailed structure report of my model objects, like print_r() or var_export() produce (right now I'm doing this by running var_export() on get_object_vars()). But what I actually want to see is only some properties (in most cases), and at the moment I have to use Ctrl+F and seek the variable I want, instead of just seeing it right after the page completes loading. So I'm embedding buttons to show/hide large arrays etc., but thought: 'What if the thing I'm building right now already exists?' So does it?

    Update: What would my ideal interface look like? First of all, dumped models fit on the first screen. All the properties can be seen at a first look at the screen (there are not many of them, around 10 per model, three models total, so it is possible). Small arrays can be shown unrolled too; let the size of an array that counts as 'small' be definable. Ideally, the user can see the values of the properties without any clicking, scrolling or typing. There should be some improvements to representing the values: say, if an array is empty, show "array 'My_big_array' is empty", and if a boolean variable starting with is_, has_ or had_ has false as its value, show the variable (let us take is_available for example) as is_NOT_available in red; if it has true as the value, show is_available in green, without any value shown. The same goes for defined constants. That would be ideal. I want to make focus on this kind of switches. Krumo seems useful, but since it always closes up the variable without taking into account how large it is, I cannot use it as is — but something similar might appear on github soon :)

    Second update: Any programmer who sees is_available = false will know what it means, no need to do more. On the colour indication, I forgot about one thing: the 'switches', let's call them so, may be important or not. Right now I have some of them that show in green or red; this is for something global, like caching, which is shown as "Caching is… ON" with 'ON' written in green (and 'OFF' in red when disabled), while the words about what it is, i.e. 'Caching is… ', are written in black. And some are not so important: for example, "I haven't defined REVEAL_TIES is… not set" with 'not set' written in gray, while the words describing what it is stay in black. And if it were set, the whole phrase would be in black, since there is nothing important there: if this small utility for showing some undercover things is working, I will see some messages after it; if it isn't, the site will work independently of its state. Dividing switches into important and unimportant ones, with matching colours, should improve readability, especially for those users who are not programmers and just enabled debug mode because some guy from bugzilla said to do that — for them it would help to show what is important and what is not.

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by Singletony
    Hi guys. We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI, i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the DBs. We are currently using JDBC and just performing queries using a ResultSet. As more and more data is being created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs: We need to support 'chunk' manipulation and not an entire DB at once (e.g. LIMIT in JDBC has very poor performance). We do not need to be constantly connected, since we are just pulling results and creating new tables of our own. We want to understand the JDBC alternatives, with respect to advantages and disadvantages. Whether you think JDBC is the way to go or not, what are the best practices to go by depending on context (e.g. for large DBs queried in chunks)? If my question is not clear, I will gladly elaborate! THANK YOU SO MUCH!
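
    As one illustration of the 'chunk' approach (a minimal sketch only: the table and column names source_table, id and payload, and the ChunkHandler callback, are invented for the example, and a MySQL-style LIMIT dialect with an indexed numeric id is assumed), keyset pagination on WHERE id > ? tends to stay fast where a plain LIMIT/OFFSET degrades, and setFetchSize() hints the driver not to buffer the whole result:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        public class ChunkedReader {
            // Hypothetical callback that builds the intermediate tables from one chunk.
            interface ChunkHandler {
                void processChunk(List<Object[]> rows) throws SQLException;
            }

            // Reads a large table in fixed-size chunks using keyset pagination
            // (WHERE id > lastId) instead of OFFSET, so each query stays cheap.
            public static void readInChunks(Connection conn, int chunkSize, ChunkHandler handler)
                    throws SQLException {
                String sql = "SELECT id, payload FROM source_table WHERE id > ? ORDER BY id LIMIT ?";
                long lastId = 0;
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setFetchSize(chunkSize);          // hint to the driver to stream rows
                    while (true) {
                        ps.setLong(1, lastId);
                        ps.setInt(2, chunkSize);
                        List<Object[]> rows = new ArrayList<>();
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                lastId = rs.getLong("id");
                                rows.add(new Object[] { lastId, rs.getString("payload") });
                            }
                        }
                        if (rows.isEmpty()) break;       // no more data
                        handler.processChunk(rows);      // write results, then drop the chunk
                    }
                }
            }
        }

    Each chunk is handed off and discarded before the next query runs, so memory stays bounded regardless of the size of the source DB, and the last id processed gives a natural restart point.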

    Read the article

  • Browser Client Side Storage aka a large Cookie

    - by Ian
    Hi, I need to store about 20-30k of data on the client side when using a website. I was using a cookie, but this is too small for my needs. Is there something else that I can use? I need to be able to do this via JavaScript. Server-side storage is a last resort, but not what I am looking for. I need it to work in Chrome, IE and Firefox. Thanks, Ian

    Read the article

  • Starting out NLP - Python + large data set

    - by pencilNero
    Hi, I've been wanting to learn Python and do some NLP, so I have finally gotten round to starting. I downloaded the English Wikipedia mirror for a nice chunky dataset to start on, and have been playing around a bit; at this stage I'm just getting some of it into a SQLite db (haven't worked with DBs in the past, unfortunately). But I'm guessing SQLite is not the way to go for a full-blown NLP project (/experiment :) — what sort of things should I look at? HBase (... and Hadoop) seems interesting; I guess I could run them in Java, prototype in Python and maybe migrate the really slow bits to Java. Alternatively, just run MySQL... but the dataset is 12 GB, so I wonder if that will be a problem? I also looked at Lucene, but am not sure how (other than breaking the wiki articles into chunks) I'd get that to work. What comes to mind for a really flexible NLP platform (I don't really know at this stage WHAT I want to do, just want to learn large-scale language analysis, tbh)? Many thanks.

    Read the article

  • Method for finding memory leak in large Java heap dumps

    - by Rickard von Essen
    I have to find a memory leak in a Java application. I have some experience with this, but would like advice on a methodology/strategy for it. Any reference and advice is welcome. About our situation: Heap dumps are larger than 1 GB. We have heap dumps from 5 occasions. We don't have any test case to provoke this; it only happens in the (massive) system test environment after at least a week's usage. The system is built on an internally developed legacy framework with so many design flaws that it is impossible to count them all. Nobody understands the framework in depth; it has been transferred to one guy in India who barely keeps up with answering e-mails. We have done snapshot heap dumps over time and concluded that there is not a single component increasing over time: everything grows slowly. The above points us in the direction that it is the framework's homegrown ORM system that increases its usage without limits. (This system maps objects to files?! So not really an ORM.) Question: What is the methodology that helped you succeed in hunting down leaks in an enterprise-scale application?
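
    For the snapshot-dumps-over-time approach, here is a small sketch (assuming a HotSpot JVM, where the com.sun.management:type=HotSpotDiagnostic MXBean is available; file names and the one-hour interval are arbitrary) of triggering dumps programmatically so consecutive snapshots can be compared in a tool such as Eclipse MAT or jhat:

        import com.sun.management.HotSpotDiagnosticMXBean;
        import java.io.IOException;
        import java.lang.management.ManagementFactory;

        public class HeapDumper {
            // Writes a heap dump of live (reachable) objects to the given .hprof file.
            public static void dump(String path) throws IOException {
                HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                        ManagementFactory.getPlatformMBeanServer(),
                        "com.sun.management:type=HotSpotDiagnostic",
                        HotSpotDiagnosticMXBean.class);
                bean.dumpHeap(path, true); // true = only live objects, keeps the file smaller
            }

            public static void main(String[] args) throws Exception {
                // Take a dump every hour; each file name must not already exist.
                for (int i = 0; ; i++) {
                    dump("snapshot-" + i + ".hprof");
                    Thread.sleep(60L * 60L * 1000L);
                }
            }
        }

    Diffing two dumps taken far apart (for example in MAT's dominator-tree or histogram comparison) is usually enough to show which structures are accumulating, even when everything appears to grow slowly.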

    Read the article

  • Getting the item count of a large SharePoint list in the fastest way

    - by sooraj
    I am trying to get the count of the items in a SharePoint document library programmatically. The scale I am working with is 30-70,000 items. We have a user control in a SmartPart to display the count. Ours is a TEAM site. This is the code to get the total count:

        SPList VoulnterrList = web.Lists[ListTitle];
        SPQuery query = new SPQuery();
        query.ViewAttributes = "Scope=\"Recursive\"";
        string queries = "<Where><Eq><FieldRef Name='ApprovalStatus' /><Value Type='Choice'>Pending</Value></Eq></Where>";
        query.Query = queries;
        SPListItemCollection lstitemcollAssoID = VoulnterrList.GetItems(query);
        lblCount.Text = "Total Proofs: " + VoulnterrList.Items.Count.ToString() +
                        " Pending Proofs: " + lstitemcollAssoID.Count.ToString();

    The problem is that this has a serious performance issue: it takes 75 to 80 seconds to load the page. If we comment this out, page load decreases to 4 seconds. Any better approach for this problem? Ours is SharePoint 2007.

    Read the article

  • Fastest way to write large STL vector to file using STL

    - by ljubak
    I have a large vector (10^9 elements) of chars, and I was wondering what is the fastest way to write such a vector to a file. So far I've been using the following code:

        vector<char> vs;
        // ... fill vector with data ...
        ofstream outfile("nanocube.txt", ios::out | ios::binary);
        ostream_iterator<char> oi(outfile, '\0');
        copy(vs.begin(), vs.end(), oi);

    For this code it takes approximately two minutes to write all the data to the file. The actual question is: "Can I make it faster using the STL, and how?"

    Read the article

  • How do I export a large table into 50 smaller CSV files of 100,000 records each

    - by Eddie
    I am trying to export one field from a very large table - containing 5,000,000 records, for example - into a CSV list, but not all together: rather, 100,000 records into each .csv file created, without duplication. How can I do this, please? I tried:

        SELECT field_name FROM table_name
        WHERE certain_conditions_are_met
        INTO OUTFILE /tmp/name_of_export_file_for_first_100000_records.csv
        LINES TERMINATED BY '\n'
        LIMIT 0 , 100000

    That gives the first 100,000 records, but nothing I do gets the other 4,900,000 records exported into 49 other files - and how do I specify the other 49 filenames? For example, I tried the following, but the SQL syntax is wrong:

        SELECT field_name FROM table_name
        WHERE certain_conditions_are_met
        INTO OUTFILE /home/user/Eddie/name_of_export_file_for_first_100000_records.csv
        LINES TERMINATED BY '\n'
        LIMIT 0 , 100000
        INTO OUTFILE /home/user/Eddie/name_of_export_file_for_second_100000_records.csv
        LINES TERMINATED BY '\n'
        LIMIT 100001 , 200000

    and that did not create the second file... What am I doing wrong, please, and is there a better way to do this? Should the LIMIT 0 , 100000 be put before the first INTO OUTFILE statement, and then the entire command repeated from SELECT for the second 100,000 records, etc.? Thanks for any help. Eddie

    Read the article

  • How to geocode a large number of addresses?

    - by user308569
    I need to geocode, i.e. translate street addresses to latitude,longitude, for ~8,000 street addresses. I am using both the Yahoo and Google geocoding engines at http://www.gpsvisualizer.com/geocoder/, and found out that for a large number of addresses those engines (one of them or both) either could not perform the geocoding (i.e. return latitude=0, longitude=0) or return the wrong coordinates (including cases where Yahoo and Google give different results). What is the best way to handle this problem? Which engine is (usually) more accurate? I would appreciate any thoughts, suggestions and ideas from people who have had previous experience with this kind of task.
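
    One rough sketch of handling this at batch scale (the geocodeWithGoogle()/geocodeWithYahoo() helpers below are purely hypothetical stand-ins for whichever engines are used, and the 0,0 convention mirrors the failure behaviour described above): run both engines per address, throttle between requests, and push failures and disagreements onto a review list rather than keeping a bogus coordinate.

        import java.util.ArrayList;
        import java.util.List;

        public class BatchGeocoder {

            // Minimal result holder; 0,0 is what the engines return on failure here.
            static final class Point {
                final double lat, lon;
                Point(double lat, double lon) { this.lat = lat; this.lon = lon; }
                boolean isFailure() { return lat == 0.0 && lon == 0.0; }
            }

            // Hypothetical wrappers around the two engines being compared.
            static Point geocodeWithGoogle(String address) { /* call engine A */ return new Point(0, 0); }
            static Point geocodeWithYahoo(String address)  { /* call engine B */ return new Point(0, 0); }

            // Rough agreement check: 0.001 degrees of latitude is on the order of 100 m.
            static boolean roughlyEqual(Point a, Point b) {
                return Math.abs(a.lat - b.lat) < 0.001 && Math.abs(a.lon - b.lon) < 0.001;
            }

            public static void main(String[] args) throws InterruptedException {
                List<String> addresses = List.of("123 Example St, Springfield");
                List<String> needsReview = new ArrayList<>();
                for (String addr : addresses) {
                    Point g = geocodeWithGoogle(addr);
                    Point y = geocodeWithYahoo(addr);
                    // Keep only addresses where both engines succeed and agree;
                    // everything else goes to a manual-review list.
                    if (g.isFailure() || y.isFailure() || !roughlyEqual(g, y)) {
                        needsReview.add(addr);
                    }
                    Thread.sleep(200); // throttle to respect the engines' rate limits
                }
                System.out.println(needsReview.size() + " of " + addresses.size() + " need review");
            }
        }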

    Read the article

  • nServiceBus with large XML messages

    - by Sean
    Hello, I have read about true messaging, where instead of sending the payload on the bus, you send an identifier. In our case, we have a lot of legacy apps/services, and those were designed to receive message payloads (XML) that are close to 4 MB (close to the MSMQ limit). Is there a way for NServiceBus to handle large payloads and persist messages automatically, or another workaround, so that the publisher/subscriber services have to worry about neither the payload size nor how to de/re-hydrate the payload? Thank you in advance.

    Read the article

  • CUDA: accumulate data into a large histogram of floats

    - by shoosh
    I'm trying to think of a way to implement the following algorithm using CUDA: working on a large volume of voxels, for each voxel I calculate an index i and a value c; after the calculation I need to perform histogram[i] += c. c is a float value and the histogram can have up to 15,000 bins. I'm looking for a way to implement this efficiently using CUDA. The first obvious problem is that with compute capability 1.3, which is what I'm using, I can't even do an atomicAdd() on floats, so how can I accumulate anything reliably? This example by nVidia does something somewhat simpler: the histograms are saved in shared memory (which I can't do due to their size) and it only accumulates integers. Can this approach be generalized to my case?

    Read the article
