Search Results

Search found 30217 results on 1209 pages for 'website performance'.

Page 257/1209 | < Previous Page | 253 254 255 256 257 258 259 260 261 262 263 264  | Next Page >

  • Using Active Directory to authenticate users in a WWW facing website

    - by Basiclife
    I'm looking at starting a new web app which needs to be secure (if for no other reason than that we'll need PCI accreditation at some point). From previous experience working with PCI (on a domain), the preferred method is to use integrated Windows authentication, which is then passed all the way through the app to the database. This allows for better auditing as well as object-level permissions (i.e. an end user can't read the credit card table). The advantage is that even if someone compromises the web server, they won't be able to glean any additional information from the database. Also, the web server isn't storing any database credentials (beyond perhaps a simple anonymous user with very few permissions). So, now I'm looking at the new web app, which will be on the public internet. One suggestion is to have an Active Directory server and create Windows accounts on the AD for each user of the site. These users would then be placed into the appropriate NT groups to decide which DB permissions they should have (and which pages they can access). ASP.NET already provides the AD membership provider and role provider, so this should be fairly simple to implement. There are a number of questions around this - scalability, reliability, etc. - and I was wondering if there is anyone out there with experience of this approach or, even better, some good reasons to do it / not to do it. Any input appreciated. Regards, Basiclife
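
    As a rough illustration of the ASP.NET piece, a minimal web.config sketch pointing the built-in membership provider at AD might look like the following (the LDAP path and provider name are placeholders, not from the question; a production setup would also need a connection account and a role strategy):

        <connectionStrings>
          <!-- Placeholder LDAP path - replace with the real domain -->
          <add name="ADConnection" connectionString="LDAP://DC=example,DC=local" />
        </connectionStrings>
        <system.web>
          <membership defaultProvider="ADMembership">
            <providers>
              <add name="ADMembership"
                   type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                   connectionStringName="ADConnection"
                   attributeMapUsername="sAMAccountName" />
            </providers>
          </membership>
        </system.web>

    The NT-group point in the question still needs a separate role-provider decision (for instance, WindowsTokenRoleProvider only applies when Windows authentication is used).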

    Read the article

  • .Net website create directory to remote server access denied

    - by tmfkmoney
    I have a web application that creates directories. The application works fine when creating a directory on the web server; however, it does not work when it tries to create a directory on our remote file server. The file server and the web server are in the same domain. I have created a local user in our domain, "DOMAIN\aspnet"; the user is on both servers. I am running my .NET app pool under the domain user. I have also tried using Windows impersonation in the web.config to run under the domain user. I have verified that the domain user has full control on the remote directory. In an effort to debug this I have also given "Everyone" full control on the remote directory, and I have added the domain user to the administrators group. I have a simple .NET test page on the web server to test this. Through the test page I am able to read the directory on the file server and get a list of everything in it. I am not able to upload files or to create directories on the file server.

    Here's code that works:

        var path = @"\\fileserver\images\";
        var di = new DirectoryInfo(path);
        foreach (var d in di.GetDirectories())
        {
            Response.Write(d.Name);
        }

    Here's code that doesn't work:

        path = Path.Combine(path, "NewDirectory");
        Directory.CreateDirectory(path);

    Here's the error I'm getting:

        Access to the path '\\fileserver\images\NewDirectory' is denied.

    I'm pretty stuck on this. Any ideas?
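
    One quick check worth adding to the test page (a hedged suggestion, not part of the original post) is to print the Windows identity the code is actually executing under at the moment the directory is created; impersonation settings and app-pool identities often disagree, and the account printed here is the one that needs both share and NTFS write permissions on the file server:

        using System.Security.Principal;

        // Hedged diagnostic (not from the original post): show which account
        // the page really runs as when it touches the remote share.
        Response.Write("Executing as: " + WindowsIdentity.GetCurrent().Name);

    If this prints the app-pool or machine account rather than DOMAIN\aspnet, the share is not being accessed with the intended credentials, which would be one explanation for reads succeeding while writes are denied.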

    Read the article

  • Slow (to none) performance on SQL 2005 after attaching SQL 2000 database

    - by ploft
    Issue: I used detach/attach to move a SQL database from a SQL 2000 SP4 instance to a much beefier SQL 2005 SP2 server. I have run reindex and reorganize and updated statistics a couple of times, but without any success. Queries on SQL 2000 took about 1-2 seconds to complete; now the same queries take 2-3 minutes on SQL 2005 (and even on 2008 - I tested it there as well). I have looked at the execution plans, and the overall cost percentages match or are similar on each server.
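
    For reference, the maintenance steps usually suggested right after attaching a 2000 database to a 2005 instance look roughly like this ('MyDb' is a placeholder; this is a generic checklist, not a confirmed fix for the case above):

        -- Raise the compatibility level so the 2005 optimizer is actually used
        EXEC sp_dbcmptlevel 'MyDb', 90;

        -- Correct page/row counts carried over from SQL 2000
        DBCC UPDATEUSAGE ('MyDb');

        -- Rebuild statistics rather than relying on the imported ones
        USE MyDb;
        EXEC sp_updatestats;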

    Read the article

  • F# performance question: what is the compiler doing?

    - by Stephen Swensen
    Referencing this code: http://stackoverflow.com/questions/2840714/f-static-member-type-constraints/2842037#2842037 - why is, for example,

        [1L..100000L] |> List.map (fun n -> factorize gL n)

    significantly slower than

        [1L..100000L] |> List.map (fun n -> factorize (G_of 1L) n)

    By looking at Reflector, I can see that the compiler is treating each of these in very different ways, but there is too much going on for me to decipher the essential difference. Naively I assumed the former would perform better than the latter, because gL is precomputed whereas G_of 1L has to be computed 100,000 times (at least it appears that way).

    Read the article

  • IE performance issues with offsetHeight and offsetWidth

    - by Paul
    I have a site that grabs the response text from an AJAX call and assigns it to innerHTML on the div that is going to contain it. After the innerHTML assignment I process the div by traversing the whole hierarchy of nodes and grabbing their offsetWidth/offsetHeight to do some operations with them. Why not the CSS style width/height? Because sometimes those values are not available, since I don't control what comes back in the AJAX response, and I also want the real box dimensions including borders/scrollbars/padding. On large injections (say 7,000 new DOM elements) IE takes far longer than Firefox/Safari just to read offsetWidth/offsetHeight; in fact, if I weren't injecting but just rendering the HTML in the browser and processing it, it would be much faster. But that is not an option, since I have to inject it into the div that will contain it. Has anybody dealt with this kind of issue before? Is there an alternative to innerHTML? I have tried using a DocumentFragment to inject and process and then moving it to the div, and I still don't see much gain. How else can I get the values that offsetWidth/offsetHeight provide? Thanks a bunch for any suggestions. Paul
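
    One commonly suggested mitigation - offered here as a sketch, not a guaranteed IE fix - is to make the measurement pass read-only: collect every offsetWidth/offsetHeight in a single loop with no DOM writes interleaved, so the engine only has to compute layout once rather than once per touched node:

        // Hedged sketch: one read-only measurement pass over the injected subtree.
        function measureAll(root) {
            var results = [];
            var nodes = root.getElementsByTagName('*');
            for (var i = 0; i < nodes.length; i++) {
                var el = nodes[i];
                results.push({ node: el, width: el.offsetWidth, height: el.offsetHeight });
            }
            // Do the follow-up operations afterwards, using the cached numbers,
            // so later style/DOM changes don't force re-measurement mid-loop.
            return results;
        }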

    Read the article

  • How to parse a custom XML-style error code response from a website

    - by user1870127
    I'm developing a program that queries and prints out open data from the local transit authority, which is returned in the form of an XML response. Normally, when there are buses scheduled to run in the next few hours (and in other typical situations), the XML response generated by the page is handled correctly by the java.net.URLConnection.getInputStream() function, and I am able to print the individual results afterwards. The problem is when the buses are NOT running, or when some other problem with my queries develops after they are sent to the transit authority's web server. When the authority developed their service, they came up with their own unique error response codes, which are also sent as XML. For example, one of these error messages might look like this:

        <Error xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <Code>3005</Code>
          <Message>Sorry, no stop estimates found for given values.</Message>
        </Error>

    (This code and similar is all that I receive from the transit authority in such situations.) However, it appears that URLConnection.getInputStream() and some of its siblings are unable to interpret this custom code as a "valid" response that I can handle and print out as an error message. Instead, they give me a more generic HTTP/1.1 404 Not Found error. This problem cascades into my program, which then prints out a java.io.FileNotFoundException pointing to the offending input stream. My question is therefore two-fold:
    1. Is there a way to retrieve, parse, and print a custom XML-formatted error code sent by a web service using the facilities available in Java?
    2. If the above is not possible, what other tools should I use or develop to handle such custom codes as described?
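
    A sketch of one standard approach (the class and method names below are illustrative only): cast the connection to HttpURLConnection and, when the status code signals an error, read the body from getErrorStream() instead of getInputStream() - the custom <Error> XML is still in the response, it just arrives alongside the 404 status:

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class TransitErrorReader {
            public static String fetchBody(String address) throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
                int status = conn.getResponseCode();
                // On 4xx/5xx, getInputStream() throws, but the error document is on getErrorStream().
                InputStream body = (status >= 400) ? conn.getErrorStream() : conn.getInputStream();
                if (body == null) {
                    return "";  // some error responses carry no body at all
                }
                StringBuilder sb = new StringBuilder();
                BufferedReader in = new BufferedReader(new InputStreamReader(body, "UTF-8"));
                String line;
                while ((line = in.readLine()) != null) {
                    sb.append(line).append('\n');
                }
                in.close();
                return sb.toString();  // hand this to any XML parser to pull out <Code> and <Message>
            }
        }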

    Read the article

  • Markup filter wanted for a public website

    - by sibidiba
    Developing a community site where everyone can post text, I'm looking for a markup filter:
    - Whatever is not part of the markup must be escaped (htmlspecialchars()) as-is.
    - It should turn URLs automatically into links.
    - It should support some form of basic markup (bold, image, url, pre, list).
    - It should have a simple parser that turns user input text into HTML.
    Content on the site is public to everyone, so XSS must not be allowed to happen. What do you suggest? Which markup language in the first place? BBCode? Wiki? Markdown? Are there any complete APIs with good examples? PHP is available on the server side. If there is a WYSIWYG-like textarea in addition (like here on SO), that would be a fantastic bonus!
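
    Whatever markup library ends up being chosen, the non-negotiable core is escape-first, then add markup on top of the escaped text. A minimal PHP sketch of that core (the function name and URL pattern are illustrative only; this is not a complete or audited filter):

        <?php
        // Hedged sketch only - a real site should layer a maintained BBCode/Markdown
        // parser over this, and always keep the escaping step first.
        function render_post($text) {
            // 1. Escape everything, so user input is never interpreted as HTML.
            $safe = htmlspecialchars($text, ENT_QUOTES, 'UTF-8');

            // 2. Auto-link bare URLs (deliberately simple pattern).
            $safe = preg_replace(
                '#(https?://[^\s<]+)#i',
                '<a href="$1" rel="nofollow">$1</a>',
                $safe
            );

            // 3. Keep the user's line breaks.
            return nl2br($safe);
        }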

    Read the article

  • NSURLConnection on simulator and iphone performance issues

    - by Nava Carmon
    I'm experiencing a weird problem and I wonder if anybody else has noticed this: I'm using NSURLConnection as it appears in Apple's examples to get XML files from a certain server - pretty straightforward. Most of the time it works, but sometimes it just gets stuck after initializing and never reaches the connection's delegate methods. I'm working over WiFi and 3G, against the same server all the time. When it does reach didFailWithError:, I see that it was mostly a timeout error. When I enter the same link in Safari it takes a second to bring back the data, and after another try I can access the link again. What might be the reason for such weird behavior? How can I improve it? What is the role of the cache policy with NSURLConnection? Thanks, Nava
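
    On the cache-policy question specifically, both the policy and the timeout can be set explicitly on the request. A short sketch (the URL and the 15-second value are placeholders; this is a general illustration, not a diagnosis of the hang):

        // Hedged sketch: explicit cache policy and timeout on the request.
        NSURL *url = [NSURL URLWithString:@"http://example.com/feed.xml"];
        NSMutableURLRequest *request =
            [NSMutableURLRequest requestWithURL:url
                                    cachePolicy:NSURLRequestReloadIgnoringLocalCacheData
                                timeoutInterval:15.0];
        NSURLConnection *connection =
            [[NSURLConnection alloc] initWithRequest:request delegate:self];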

    Read the article

  • clam anti-virus is slowing down my server performance

    - by Scarface
    Hey guys, I just installed the php-clamav extension (http://sourceforge.net/projects/php-clamav/) for scanning file uploads on my Linux VPS running PHP. The problem is that, for some reason, just enabling the extension in the php.ini file slows down my entire network. Regular requests, such as changing pages, that should take less than 1 second take 5. Has anyone ever experienced this before, or do you have a good alternative for virus-scanning file uploads? My php.ini settings:

        extension=clamav.so
        [clamav]
        clamav.dbpath="/usr/share/clamav"
        clamav.keeptmp=20
        clamav.maxreclevel=16
        clamav.maxfiles=10000
        clamav.maxfilesize=26214400
        clamav.maxscansize=104857600
        clamav.keeptmp=0

    Read the article

  • boost multi_index_container and erase performance

    - by rjoshi
    I have a boost multi_index_container, declared as below, which is indexed by a hashed_unique id (unsigned long) and a hashed_non_unique transaction id (long). Insertion and retrieval of elements is fast, but when I delete elements it is much slower. I was expecting it to be constant time, as the key is hashed. For example, erasing elements from the container takes:
    - around 2.53927016 seconds for 10,000 elements
    - around 7.137100068 seconds for 15,000 elements
    - around 21.391720757 seconds for 20,000 elements
    Is it something I am missing, or is this expected behavior?

        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>
        #include <boost/thread/mutex.hpp>
        #include <cstdlib>
        #include <ctime>
        #include <iostream>
        #include <vector>

        using namespace boost::multi_index;

        class Session {
        public:
            Session() {
                // increment unique id
                static unsigned long counter = 0;
                boost::mutex::scoped_lock guard(mx);
                counter++;
                m_nId = counter;
            }
            unsigned long GetId() { return m_nId; }
            long GetTransactionHandle() { return m_nTransactionHandle; }
            ....
        private:
            unsigned long m_nId;
            long m_nTransactionHandle;
            boost::mutex mx;
            ....
        };

        typedef multi_index_container<
            Session*,
            indexed_by<
                hashed_unique< mem_fun<Session, unsigned long, &Session::GetId> >,
                hashed_non_unique< mem_fun<Session, long, &Session::GetTransactionHandle> >
            > // end indexed_by
        > SessionContainer;

        typedef SessionContainer::nth_index<0>::type SessionById;

        int main(int argc, char* argv[]) {
            ...
            SessionContainer container;
            SessionById* pSessionIdView = &get<0>(container);
            unsigned counter = atoi(argv[1]);

            std::vector<Session*> vSes;
            vSes.reserve(counter);

            // insert
            for (unsigned i = 0; i < counter; i++) {
                Session* pSes = new Session();
                container.insert(pSes);
                vSes.push_back(pSes);
            }

            timespec start, end;
            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);

            // erase
            for (unsigned i = 0; i < counter; i++) {
                pSessionIdView->erase(vSes[i]->GetId());
                delete vSes[i];
            }

            clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
            double elapsed = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
            std::cout << "Total time taken for erase: " << elapsed << "\n";

            return EXIT_SUCCESS;
        }

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

        INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId)
        VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL);
        DECLARE @InvDetail1 INT;
        SET @InvDetail1 = (SELECT @@IDENTITY);

    This query is generated for only 110K rows, yet it takes 30 minutes for all of these queries to execute. I checked the query plan and the largest-cost nodes are:
    - a Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post, and
    - a Table Spool at 38% query cost:

        <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
          <OutputList>
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
            <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
            <ColumnReference Column="Expr1054" />
            <ColumnReference Column="Expr1055" />
          </OutputList>
          <Spool PrimaryNodeId="3" />
        </RelOp>

    So my question is: what can I do to improve the speed of this thing? I already run

        ALTER TABLE TABLENAME NOCHECK CONSTRAINT ALL

    before the queries and the corresponding re-enable (CHECK CONSTRAINT ALL) after the queries, and that hardly shaved anything off the time. Now, I am running these queries from a .NET application that uses a SqlCommand object to send each query. I then tried to output the SQL commands to a file and execute it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?
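
    If capturing @@IDENTITY per row is not strictly required, one commonly recommended alternative - sketched below in C# with placeholder names, not taken from the question - is to stop issuing 110K individual statements and stream the rows with SqlBulkCopy; the trade-off is that no identity value comes back per row:

        using System.Data;
        using System.Data.SqlClient;

        static void BulkInsertInvoiceDetails(DataTable rows, string connectionString)
        {
            // 'rows' must have columns matching InvoiceDetail's column names.
            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "InvoiceDetail";
                bulk.BatchSize = 10000;      // commit in chunks instead of one huge batch
                bulk.WriteToServer(rows);
            }
        }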

    Read the article

  • Unexpected performance curve from CPython merge sort

    - by vkazanov
    I have implemented a naive merge sorting algorithm in Python. The algorithm and test code are below:

        import time
        import random
        import matplotlib.pyplot as plt
        import math
        from collections import deque

        def sort(unsorted):
            if len(unsorted) <= 1:
                return unsorted
            to_merge = deque(deque([elem]) for elem in unsorted)
            while len(to_merge) > 1:
                left = to_merge.popleft()
                right = to_merge.popleft()
                to_merge.append(merge(left, right))
            return to_merge.pop()

        def merge(left, right):
            result = deque()
            while left or right:
                if left and right:
                    elem = left.popleft() if left[0] > right[0] else right.popleft()
                elif not left and right:
                    elem = right.popleft()
                elif not right and left:
                    elem = left.popleft()
                result.append(elem)
            return result

        LOOP_COUNT = 100
        START_N = 1
        END_N = 1000

        def test(fun, test_data):
            start = time.clock()
            for _ in xrange(LOOP_COUNT):
                fun(test_data)
            return time.clock() - start

        def run_test():
            timings, elem_nums = [], []
            test_data = random.sample(xrange(100000), END_N)
            for i in xrange(START_N, END_N):
                loop_test_data = test_data[:i]
                elapsed = test(sort, loop_test_data)
                timings.append(elapsed)
                elem_nums.append(len(loop_test_data))
                print "%f s --- %d elems" % (elapsed, len(loop_test_data))
            plt.plot(elem_nums, timings)
            plt.show()

        run_test()

    As far as I can see everything is OK and I should get a nice N*logN curve as a result. But the picture differs a bit (the timing plot is not reproduced here; per the notes below, it shows irregular jumps rather than a smooth curve). Things I've tried to investigate the issue:
    - PyPy. The curve is OK.
    - Disabled the GC using the gc module. Wrong guess: debug output showed that it doesn't even run until the end of the test.
    - Memory profiling using meliae - nothing special or suspicious.
    - I had another implementation (a recursive one using the same merge function); it behaves in a similar way.
    The more full test cycles I create, the more "jumps" there are in the curve. So how can this behaviour be explained and - hopefully - fixed?
    UPD: changed lists to collections.deque
    UPD2: added the full test code
    UPD3: I use Python 2.7.1 on Ubuntu 11.04, on a quad-core 2 GHz notebook. I tried to turn off most other processes: the number of spikes went down, but at least one of them was still there.

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at the rate of ~10 million rows (about 5 GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200 GB). We create a clustered index, to best suit retrieving data from the table, on the following columns that serve as our primary key:

        [FileDate] ASC,
        [Region] ASC,
        [KeyValue1] ASC,
        [KeyValue2] ASC

    ...because when we query the table, we always have the entire primary key. So these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.

        SELECT [Region]
             , MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region]

    The "best" solution I can come up with is to create a non-clustered index on Region. Although it means an additional index insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results for our query come back quickly enough (200 ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".
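
    For concreteness, a sketch of the index described above (table and column names as used in the question; the DESC ordering on FileDate is an assumption added here so the per-region MAX can be satisfied directly from the index):

        CREATE NONCLUSTERED INDEX IX_HugeTable_Region_FileDate
            ON HugeTable ([Region] ASC, [FileDate] DESC);

        -- The report query can then be answered from this narrow index alone:
        SELECT [Region], MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region];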

    Read the article

  • share the same cookie between two websites using the PHP cURL extension

    - by powerboy
    I want to get the contents of some emails in my Gmail account. I would like to use the PHP cURL extension to do this. I followed these steps in my first try:
    1. In the PHP code, output the contents of https://www.google.com/accounts/ServiceLoginAuth.
    2. In the browser, the user inputs username and password to log in.
    3. In the PHP code, save cookies in a file named cookie.txt.
    4. In the PHP code, send a request to https://mail.google.com/ along with the cookies retrieved from cookie.txt and output the contents.
    The following code does not work:

        $login_url = 'https://www.google.com/accounts/ServiceLoginAuth';
        $gmail_url = 'https://mail.google.com/';
        $cookie_file = dirname(__FILE__) . '/cookie.txt';

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_HEADER, false);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file);
        curl_setopt($ch, CURLOPT_URL, $login_url);
        $output = curl_exec($ch);
        echo $output;

        curl_setopt($ch, CURLOPT_URL, $gmail_url);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file);
        $output = curl_exec($ch);
        echo $output;

        curl_close($ch);

    Read the article

  • FormsAuthentication redirecting to login page when visiting root of website

    - by Ryan Lattimer
    I wanted to use FormsAuthentication to secure the static files on my site as well, so I followed the instructions located at http://learn.iis.net/page.aspx/244/how-to-take-advantage-of-the-iis7-integrated-pipeline/ under the title "Enabling Forms Authentication for the Entire Application". Now, though, when I try to visit the site by going directly to http://www.mysite.com I get redirected to http://www.mysite.com/Login.aspx?ReturnUrl=%2f instead of it using the default document I have set. I can go to my default document by just visiting http://www.mysite.com/Home.aspx without any issues, because it is set to allow anonymous access. Is there something I need to add to my web.config file to make IIS7 allow anonymous access to the root? I tried adding with anonymous access but no such luck. Any help would be much appreciated. Both Home and the Login form allow anonymous:

        <location path="Home.aspx">
          <system.web>
            <authorization>
              <allow users="*" />
            </authorization>
          </system.web>
        </location>
        <location path="Login.aspx">
          <system.web>
            <authorization>
              <allow users="*" />
            </authorization>
          </system.web>
        </location>

    The login form is set as the loginUrl:

        <authentication mode="Forms">
          <forms protection="All" loginUrl="Login.aspx">
          </forms>
        </authentication>

    The default document is set as Home.aspx:

        <defaultDocument>
          <files>
            <add value="Home.aspx" />
          </files>
        </defaultDocument>

    I have not removed any of the IIS7 default documents; however, Home.aspx is first in the priority.

    Read the article

  • Optimizing Smart Client Performance

    - by Burt
    I have a smart client (WPF) that makes calls to the server via services (WCF). The screen I am working on holds a list of objects that it loads when the constructor is called. I am able to add, edit and delete records in the list. Typically, after every add or delete I reload the entire model from the service again; there are a number of reasons for this, including the fact that the data may have changed on the server between calls. This approach has proved to be a big hit on performance, because I am loading everything and sending the whole list up and down the wire on every add and edit. What other options are open to me? Should I only be sending the required information to the server, and how would I go about not reloading all the data every time an add or delete is performed?
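
    One direction worth considering - sketched below with hypothetical names as an illustration of "send only what changed", not the poster's actual contract - is to narrow the service contract so each operation returns just the affected record, leaving a full reload for explicit refreshes or detected conflicts:

        using System.Collections.Generic;
        using System.ServiceModel;

        // 'Record' stands in for the screen's data contract type.
        [ServiceContract]
        public interface IRecordService
        {
            [OperationContract]
            Record Add(Record newRecord);      // echoes back the stored record with server-generated fields

            [OperationContract]
            Record Update(Record changed);     // returns the current server copy, e.g. for concurrency checks

            [OperationContract]
            void Delete(int recordId);

            [OperationContract]
            IList<Record> GetAll();            // kept for explicit refreshes only
        }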

    Read the article

  • Very slow remote page load performance in QtWebKit (Windows)

    - by Gil
    I get very slow load times with QtWebKit on Windows 7 over ADSL. I am using the Qt demo browser on a Core 2 Quad, 64-bit Windows 7, 4 GB RAM, 2 GHz processor, going through a VPN. Simplest example: the Google search page takes ~18 seconds to load, compared to 2.5 seconds in Chrome (cache cleared). On larger pages, with scripts etc., it is worse. I tried Qt 4.6 and also the Qt 4.7 beta, but don't see any difference. I see the same results with the Arora browser. Are there any settings or patches that can be applied to fix this? Thanks
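
    One thing that sometimes causes exactly this pattern when a VPN is involved is proxy resolution on every request. A small C++ experiment (hedged - this is a common Qt suggestion, not a confirmed fix for this setup) is to set the proxy policy once, before the web view issues any requests:

        #include <QNetworkProxy>
        #include <QNetworkProxyFactory>

        void configureProxyOnce(bool useSystemProxy)
        {
            if (useSystemProxy) {
                // Pick up the Windows/IE proxy settings once at startup (Qt 4.6+).
                QNetworkProxyFactory::setUseSystemConfiguration(true);
            } else {
                // Or bypass proxying entirely to rule it out.
                QNetworkProxy::setApplicationProxy(QNetworkProxy(QNetworkProxy::NoProxy));
            }
        }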

    Read the article

  • Cannot see my wordpress website on google search

    - by ion
    Hi guys, I recently uploaded a site made with WordPress. The site URL is oakabeachvolley.gr. I have set the WordPress privacy settings so the site is visible to search engines. However, after almost 45 days the site is invisible on Google, even when I search using the URL name and very specific keywords. I have made quite a few sites with WordPress and I have never seen this behavior before: sites eventually become visible in Google, sometimes even on the first day. In this case, however, the site does not show anywhere in the first 20 pages of results. Any help would be greatly appreciated.

    Read the article

  • How to get non-latin characters from website?

    - by latata
    I am trying to get data from latata.pl/pl.php and view all of its characters (Polish, ISO-8859-2):

        final URL url = new URL("http://latata.pl/pl.php");
        final URLConnection urlConnection = url.openConnection();
        final BufferedReader in = new BufferedReader(new InputStreamReader(
                urlConnection.getInputStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            System.out.println(inputLine);
        }
        in.close();

    It doesn't work. :( Any ideas?
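
    A likely cause (offered as a sketch, since the page's response headers aren't shown): the InputStreamReader above uses the platform default charset, so the ISO-8859-2 bytes are decoded with the wrong encoding. Passing the charset explicitly usually fixes it:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLConnection;

        public class FetchPolishPage {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://latata.pl/pl.php");
                URLConnection urlConnection = url.openConnection();
                // Decode with the page's actual encoding instead of the platform default.
                BufferedReader in = new BufferedReader(new InputStreamReader(
                        urlConnection.getInputStream(), "ISO-8859-2"));
                String inputLine;
                while ((inputLine = in.readLine()) != null) {
                    System.out.println(inputLine);
                }
                in.close();
            }
        }

    In general the charset should be taken from the response's Content-Type header (URLConnection.getContentType()) rather than hard-coded.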

    Read the article
