Search Results

Search found 1999 results on 80 pages for 'temporary'.


  • Why does my JavaScript file sometimes get compressed and sometimes not? (IIS gzip problem)

    - by Kevin Yang
    I enabled gzip for JavaScript files in my IIS settings; here is the corresponding config section:

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="10" dynamicCompressionLevel="8" />
          <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/soap+msbin1" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/javascript" enabled="true" />
            <add mimeType="application/x-javascript" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </staticTypes>
        </httpCompression>

    Currently, when I download my JS file, the server sometimes returns the gzipped version and sometimes does not. I don't know why, or how to debug it. Once a file has been gzipped, it should be cached on local disk, and the next time someone requests that file, IIS should return the cached gzip file directly without compressing it again. Is that right?
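
    One likely culprit worth checking: IIS 7's static compression is "frequent hit"-gated, so by default a URL is only compressed (and its gzipped copy cached) once it has been requested twice within a ten-second window; infrequent requests come back uncompressed. If that matches the symptoms, the threshold can be lowered in system.webServer; the values below are illustrative:

        <serverRuntime frequentHitThreshold="1" frequentHitTimePeriod="00:00:10" />

    Otherwise the caching assumption in the question is correct: compressed copies are stored under the directory named in httpCompression and served from there on later hits.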


  • JavaScript getElementsByTagName broken in Firefox?

    - by Sheldon Ross
    I'm getting the weirdest issues with JavaScript in Firefox today. I'm trying to manipulate some table rows, but .getElementsByTagName("tr") is pulling back junk.

        dynamicTable.tableBody = dynamicTable.getElementsByTagName("tbody")[0];
        var tableRows = dynamicTable.tableBody.getElementsByTagName("TR");
        var actualTableRows = new Array();
        for (var i in tableRows) {
            var row = tableRows[i];
            alert(row.tagName);
            if (row.tagName == "TR") {
                actualTableRows.push(row);
            }
        }
        dynamicTable.bodyRows = actualTableRows;

    The puzzling part, of course, is my temporary hack to fix the error: for some reason .getElementsByTagName("tr") is pulling back some functions as well. Incidentally, the alert above goes something like TR TR TR TR undefined undefined undefined. The code I wanted was something like this:

        dynamicTable.bodyRows = dynamicTable.tableBody.getElementsByTagName("tr");

    But then bodyRows does not contain just tr elements; it has the aforementioned junk in it. Any thoughts?
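
    The junk is a known for-in pitfall rather than a Firefox bug: getElementsByTagName returns a live HTMLCollection, and for (var i in collection) enumerates all of its enumerable properties (length, item, namedItem, ...), not just the numeric indices, which is where the undefined tagName alerts come from. A plain indexed loop avoids the filtering hack entirely; a minimal sketch:

        var tableRows = dynamicTable.tableBody.getElementsByTagName("tr");
        for (var i = 0; i < tableRows.length; i++) {
            actualTableRows.push(tableRows[i]);  // only real <tr> elements, in document order
        }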


  • Passing data between objects in Chain of Responsibility pattern

    - by AbrahamJP
    While implementing the Chain of Responsibility pattern, I came across a dilemma on how to pass data between objects in the chain. The data types passed between objects can differ for each object. As a temporary fix I created a static class containing a stack: each object in the chain pushes its results onto the stack, and the next object in the chain pops the results off. Here is a sample of what I implemented:

        public interface IHandler
        {
            IHandler Successor { get; set; }
            void Process();
        }

        public static class StackManager
        {
            public static Stack DataStack = new Stack();
        }

        // This class doesn't require any input to operate
        public class OpsA : IHandler
        {
            public IHandler Successor { get; set; }

            public void Process()
            {
                // Do some processing, store the result on the stack
                var ProcessedData = DoSomeOperation();
                StackManager.DataStack.Push(ProcessedData);
                if (Successor != null) Successor.Process();
            }
        }

        // This class requires input data to operate upon
        public class OpsB : IHandler
        {
            public IHandler Successor { get; set; }

            public void Process()
            {
                // Retrieve the result of the previous operation
                var InputData = StackManager.DataStack.Pop();
                // Do some processing, store the result on the stack
                var NewProcessedData = DoMoreProcessing(InputData);
                StackManager.DataStack.Push(NewProcessedData);
                if (Successor != null) Successor.Process();
            }
        }

        public class ChainOfResponsibilityPattern
        {
            public void Process()
            {
                IHandler ProcessA = new OpsA();
                IHandler ProcessB = new OpsB();
                ProcessA.Successor = ProcessB;
                ProcessA.Process();
            }
        }

    Please help me find a better approach to pass data between the handler objects in the chain.
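
    One common alternative is to hand a context object down the chain instead of mutating shared static state; a sketch (type and member names here are illustrative):

        public class ChainContext
        {
            public Stack<object> Data { get; } = new Stack<object>();
        }

        public interface IHandler
        {
            IHandler Successor { get; set; }
            void Process(ChainContext context);
        }

    Each handler pops its input from context.Data, pushes its result, and then calls Successor.Process(context). Because the state travels with the call rather than living in a static field, two chains can run concurrently without trampling each other's data, which the StackManager version cannot guarantee.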


  • Avoid MySQL multi-results from SP with Execute

    - by hhyhbpen
    Hi, I have an SP like:

        BEGIN
          DECLARE ...
          CREATE TEMPORARY TABLE tmptbl_found (...);
          PREPARE find FROM "
            INSERT INTO tmptbl_found
            (SELECT userid FROM
              ( SELECT userid FROM Soul WHERE .?.?. ORDER BY .?.?. ) AS left_tbl
              LEFT JOIN Contact ON userid = Contact.userid
              WHERE Contact.userid IS NULL
              LIMIT ?)
          ";
          DECLARE iter CURSOR FOR SELECT userid, ... FROM Soul ...;
          ...
          l: LOOP
            FETCH iter INTO u_id, ...;
            ...
            EXECUTE find USING ..., . . ., u_id, ...;
            ...
          END LOOP;
          ...
        END //

    and it gives multi-results. Besides being inconvenient, if I get all these multi-results (which I really don't need at all), about 5 (the LIMIT parameter) for each of the hundreds of thousands of records in Soul, I'm afraid they will take all my memory (and all in vain). Also, I noticed that if I PREPARE from an empty string, it still produces multi-results... How can I at least get rid of them in the EXECUTE statement? And I would like a recipe to avoid ANY output from an SP, for any possible statement (I also have a lot of "UPDATE ..."s and "SELECT ... INTO"s inside, if they can produce multi-results). Thanks for any help.
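
    As a general recipe (a rule of thumb, not a diagnosis specific to this SP): inside a MySQL stored procedure, any SELECT that does not deliver its rows somewhere server-side is sent to the client as an extra result set, while SELECT ... INTO variables, INSERT ... SELECT, and plain UPDATEs produce none. So auditing the loop body for bare SELECTs is the first place to look; the contrast in a sketch:

        -- sends a result set to the client:
        SELECT COUNT(*) FROM tmptbl_found;

        -- sends nothing; the value lands in a variable instead:
        SELECT COUNT(*) INTO @cnt FROM tmptbl_found;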


  • Commercial web application--scalable database design

    - by Rob Campbell
    I'm designing a set of web apps to track scientific laboratory data. Each laboratory has several members, each of whom will access both their own data and that of their laboratory as a whole. Many typical queries will thus be expected to return records of multiple members (e.g. my mouse, Joe's mouse and Sally's mouse). I think I have the database fairly well normalized. I'm now wondering how to ensure that users can efficiently access both their own data and their lab's data set when it is mixed among (hopefully) a whole ton of records from other labs.

    What I've come up with so far is that most tables will end with two fields: user_id and labgroup_id. The WHERE clause of any SELECT statement will include the appropriate reference to one of the id fields ("... WHERE labgroup_id = n ..." or "... WHERE user_id = n ..."). My questions are:

    1. Is this an approach that will scale to 10^6 or more records?
    2. If so, what's the best way to use these fields in a query so that it most efficiently searches the relevant subset of the database? For example, should the first step in querying be to create a temporary table containing just the lab group's data, or will indexing on some combination of the id, user_id, and labgroup_id fields be sufficient at that scale?

    I thank any responders very much in advance.
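
    At 10^6 rows, plain B-tree indexes on the two id columns are normally all this takes; a temporary table per query would only add overhead. A sketch, with a hypothetical mouse table standing in for any of the data tables:

        CREATE INDEX idx_mouse_labgroup ON mouse (labgroup_id);
        CREATE INDEX idx_mouse_user     ON mouse (user_id);

        -- the planner can now touch only the lab's rows:
        SELECT * FROM mouse WHERE labgroup_id = 42;

    If queries routinely filter on both columns together, a composite index such as (labgroup_id, user_id) covers the lab-wide query and the per-user query with one structure.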


  • How to syntax-highlight XML in CDATA elements in Vim?

    - by Jim Hurne
    Vim's syntax highlighting for XML/XSL is great, except that it turns off all syntax highlighting inside CDATA regions. Is there a way to turn syntax highlighting on in CDATA regions?

    At work we have a lot of XSL code embedded within other XML documents. It would be great to get all the goodness of XML editing for the embedded XSL code as well, without having to temporarily remove the CDATA tags or copy the CDATA content into a temporary file. Example:

        <root>
          <someTag><![CDATA[
            <xsl:template match="/">
              <!-- XSL content here -->
            </xsl:template>
          ]]>
          </someTag>
        </root>

    Note that the name of the tag containing the content (someTag in the example) could be anything. We also sometimes embed JavaScript inside CDATA regions, and again it would be nice to turn on JavaScript syntax highlighting for those regions; the tag the data is embedded in is usually arbitrary there too.
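
    Vim can nest one syntax inside another with :syntax include, which is the usual route for exactly this. A sketch for an after-syntax file such as ~/.vim/after/syntax/xml.vim (untested against every xml.vim revision, and the region may need priority tweaks to coexist with xml.vim's own CDATA rules):

        " pull the stock XSLT syntax in as a cluster
        if exists("b:current_syntax")
          unlet b:current_syntax
        endif
        syntax include @XSL syntax/xslt.vim

        " highlight CDATA bodies with it
        syntax region xmlCdataXsl start=+<!\[CDATA\[+ end=+\]\]>+ contains=@XSL keepend

    The same pattern with syntax/javascript.vim covers the embedded-JavaScript case; choosing between the two automatically is harder, since the enclosing tag name is arbitrary.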


  • Programmatically rebuild .exd files when loading VBA

    - by aspartame
    Hi. After updating Microsoft Office 2007 to Office 2010, some custom VBA scripts embedded in our software failed to compile with the following error message:

        Object library invalid or contains references to object definitions that could not be found.

    As far as I know, this error is a result of a security update from Microsoft (Microsoft Security Advisory 960715). When ActiveX controls are added to VBA scripts, information about the controls is stored in cache files (.exd files) on the local hard drive. The security update modified some of these controls, but the .exd files were not automatically updated, and the error occurs when the VBA scripts try to load the old versions of the controls stored in the cached files. These cache files must be removed from the hard drive for the controls to load successfully (which automatically creates new, updated .exd files).

    What I would like to do is programmatically (using Visual C++) remove the outdated .exd files when our software loads. When opening a VBA project using CApcProject::ApcProject.Open I set the axProjectThrowAwayCompiledState flag:

        TestHR(ApcProject.Open(pHost, (MSAPC::AxProjectFlag)
            (MSAPC::axProjectNormal | MSAPC::axProjectThrowAwayCompiledState)));

    According to the documentation, this flag should cause the VBA project to be recompiled and the temporary files to be deleted and rebuilt. I've also tried updating the checksum of the host application type library, which should have the same effect. However, none of these fixes seems to do the job and I'm running out of ideas. Help is very much appreciated!
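
    If the Apc flag route stays a dead end, deleting the caches directly is straightforward from C++. The .exd files are rebuilt on demand, and the cache directories usually sit under the user's temp folder (%TEMP%\VBE and, for Office hosts, folders like %TEMP%\Excel8.0); verify the locations for the hosts you support before shipping a sketch like this:

        #include <windows.h>
        #include <string>

        // Delete all *.exd files in one directory (non-recursive).
        static void DeleteExdFiles(const std::wstring& dir)
        {
            WIN32_FIND_DATAW fd;
            HANDLE h = FindFirstFileW((dir + L"\\*.exd").c_str(), &fd);
            if (h == INVALID_HANDLE_VALUE) return;
            do {
                DeleteFileW((dir + L"\\" + fd.cFileName).c_str());
            } while (FindNextFileW(h, &fd));
            FindClose(h);
        }

        void PurgeControlCaches()
        {
            wchar_t tmp[MAX_PATH];
            if (GetTempPathW(MAX_PATH, tmp) == 0) return;   // returned path ends with a backslash
            DeleteExdFiles(std::wstring(tmp) + L"VBE");
            DeleteExdFiles(std::wstring(tmp) + L"Excel8.0");
        }

    Running this before opening the VBA project forces fresh, post-update .exd files to be generated.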


  • Is using private shared objects/variables at class level harmful?

    - by haansi
    Hello, thanks for your attention and time. I need your opinion on a basic architectural issue, please.

    In page-behind classes I am using private, shared objects and variables (a list, or just a client, or simply an int id) to temporarily hold data coming from the database or a class library. Such an object is used temporarily to catch data and then return it, pass it to some function, or bind a control.

    1st: Can this approach do any harm? I couldn't analyze it, but one thought was that such shared variables may have their data replaced when multiple users send requests at the same time.

    2nd: Please comment also on using such variables in the BLL (to hold data coming from the DAL/database). In this example a new object of the BLL class will be made every time. Here is sample code:

        public class ClientManager
        {
            Client objclient = new Client();               // used by GetaClient
            List<Client> clientlist = new List<Client>();  // used by GetClients and SearchClients
            ClientRepository objclientRep = new ClientRepository();

            public List<Client> GetClients()
            {
                return clientlist = objclientRep.GetClients();
            }

            public List<Client> SearchClients(string Keyword)
            {
                return clientlist = objclientRep.SearchClients(Keyword);
            }

            public Client GetaClient(int ClientId)
            {
                return objclient = objclientRep.GetaClient(ClientId);
            }

            public Client GetClientDetailForConfirmOrder(int UserId)
            {
                return objclientRep.GetClientDetailForConfirmOrder(UserId);
            }
        }

    I am really thankful to you for sparing your time and kind attention.
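
    Two observations, on the assumption that a ClientManager is created per request as described: the fields in the sample are instance fields, so each request gets its own copies and there is no cross-user overwriting. That danger only appears if the fields (or the page members) are declared static, since static state in ASP.NET is shared by every request in the application. And because each field is only assigned and immediately returned, the holding fields can simply be dropped:

        public List<Client> GetClients()
        {
            return objclientRep.GetClients();   // no intermediate field needed
        }

    Stateless methods like this are also what make the thread-safety question moot.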


  • FileNotFoundException Java

    - by Troels Hansen
    Hi, I'm trying to make a simple highscore system for a minesweeper game. However, I keep getting a FileNotFoundException, and I've tried using the full path for the file as well.

        package minesweeper;

        import java.io.*;
        import java.util.*;

        public class Highscore {

            public static void submitHighscore(String difficulty) throws IOException {
                int easy = 99999;
                int normal = 99999;
                int hard = 99999;
                //int newScore = (int) MinesweeperView.getTime();
                int newScore = 10;

                File f = new File("Highscores.dat");
                if (!f.exists()) {
                    f.createNewFile();
                }

                Scanner input = new Scanner(f);
                PrintStream output = new PrintStream(f);

                if (input.hasNextInt()) {
                    easy = input.nextInt();
                    normal = input.nextInt();
                    hard = input.nextInt();
                }
                output.flush();

                if (difficulty.equals("easy")) {
                    if (easy > newScore) {
                        easy = newScore;
                    }
                } else if (difficulty.equals("normal")) {
                    if (normal > newScore) {
                        normal = newScore;
                    }
                } else if (difficulty.equals("hard")) {
                    if (hard > newScore) {
                        hard = newScore;
                    }
                }

                output.println(easy);
                output.println(normal);
                output.println(hard);
            }

            // temporary main method used for debugging
            public static void main(String[] args) throws IOException {
                submitHighscore("easy");
            }
        }
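
    Two things stand out, offered as likely suspects rather than a certain diagnosis. First, new File("Highscores.dat") is resolved against the process working directory, which under an IDE is often not where you expect; a FileNotFoundException whose message says "Access is denied" can also mean the path points somewhere unwritable. Second, even once it opens, new PrintStream(f) truncates the file before the Scanner gets to read it, so the old scores are lost. A sketch of the safer ordering:

        File f = new File("Highscores.dat");        // relative to the working directory
        if (!f.exists()) f.createNewFile();

        Scanner input = new Scanner(f);             // read the old scores first...
        if (input.hasNextInt()) { /* read easy/normal/hard */ }
        input.close();

        PrintStream output = new PrintStream(f);    // ...and only then truncate and rewrite
        output.println(easy);
        output.println(normal);
        output.println(hard);
        output.close();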


  • JVM GC demote object to eden space?

    - by Kevin
    I'm guessing this isn't possible... but here goes. My understanding is that eden space is cheaper to collect than old gen space, especially when you start getting into very large heaps. Large heaps tend to come up with long-running applications (server apps), and server apps often want to use some kind of cache. Caches with some kind of eviction (LRU) tend to defeat the assumptions that GC makes (temporary objects die quickly), so cache evictions end up filling old gen faster than you'd like, and you end up with a more costly old gen collection.

    Now, it seems like this sort of thing could be avoided if Java provided a way to mark a reference as about to die (a delete keyword)? The difference from C++ is that its use would be optional. And calling delete would not actually delete the object, but rather be a hint to the GC that it should demote the object back to eden space (where it will be more easily collected). I'm guessing this feature doesn't exist, but why not? Is there a reason it's a bad idea?
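
    For what it's worth, no such hint exists, and it would be hard to honor: HotSpot only moves objects during collections, and a "demote" request made through one reference says nothing about other references that may still keep the object live. The conventional compromise is to let the collector, rather than the cache, decide entry lifetime via soft references; a minimal sketch:

        import java.lang.ref.SoftReference;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        class ByteCache {
            // entries become reclaimable under memory pressure, so cache
            // churn stops pinning garbage in the old generation
            private final Map<String, SoftReference<byte[]>> cache =
                    new ConcurrentHashMap<String, SoftReference<byte[]>>();

            byte[] get(String key) {
                SoftReference<byte[]> ref = cache.get(key);
                return ref != null ? ref.get() : null;   // null => recompute and put()
            }

            void put(String key, byte[] value) {
                cache.put(key, new SoftReference<byte[]>(value));
            }
        }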


  • SQL Server compare table entries for update

    - by Dave
    I have a trade table with several million rows. Each row represents a version of a trade. When I'm given a possibly-new trade, I compare it to the latest version in the trade table: if it has changed, I add a new version; otherwise I do nothing.

    To compare the two trades, I read the version from the trade table into my application. This doesn't work well when I'm given tens of thousands of possibly-new trades. Even batching the reads to pull in 1000 trades at a time and compare them, the whole process can take several minutes, and all the time is spent in the DB.

    I'm trying to find a way to compare the possibly-new trades to the ones in the trade table without so much I/O. What I've come up with so far is adding a hash column to each row in the trade table, where the hash covers all the trade fields. When I'm given possibly-new trades, I compute their hashes, put the values into a temporary table, then find the ones that are different. This feels very hacky. Is there a better way of doing it? Thanks.
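
    The hash column is less hacky than it feels; it is essentially how slowly-changing-dimension loads detect changed rows. SQL Server can even maintain the hash in a persisted computed column, after which the comparison is one set-based join instead of row-by-row reads. A sketch (table and column names are placeholders; HASHBYTES input is capped at 8000 bytes before SQL Server 2016, and CONCAT assumes 2012 or later):

        ALTER TABLE trade ADD row_hash AS
            HASHBYTES('SHA2_256', CONCAT(field1, '|', field2, '|', field3)) PERSISTED;

        -- bulk-insert the candidate trades into a staging table, then:
        SELECT s.*
        FROM   staging_trade s
        LEFT JOIN latest_trade t ON t.trade_key = s.trade_key
        WHERE  t.trade_key IS NULL                 -- brand-new trade
           OR  t.row_hash <> HASHBYTES('SHA2_256',
                   CONCAT(s.field1, '|', s.field2, '|', s.field3));

    The delimiter guards against adjacent fields concatenating into the same string; NULLs deserve the same care, since CONCAT treats NULL as an empty string.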


  • How can I retrieve files from a folder on the hard disk, and display uploaded file data in a textarea?

    - by Deepak Narwal
    I have made an application form which asks for a username, password, email id and the user's resume. After uploading, I store the resume on the hard disk under htdocs/uploadedfiles/ in a format like username_filename. In the database I store the file name, file size and file type. Some of the code for this:

        $filesize  = $_FILES['file']['size'];
        $filename  = $_FILES['file']['name'];
        $filetype  = $_FILES['file']['type'];
        $temp_name = $_FILES['file']['tmp_name'];   // temporary name of the uploaded file
        $pwd_hash  = hash('sha1', $_POST['password']);

        $target_path = "uploadedfiles/";
        $target_path = $target_path . $_POST['username'] . "_" . basename($_FILES['file']['name']);
        move_uploaded_file($_FILES['file']['tmp_name'], $target_path);

        $sql = "insert into employee values ('NULL','{$_POST[username]}','{$pwd_hash}','{$filename}','{$filetype}','$filesize',NOW())";

    Now I have two questions:

    1. How can I display this file's data in a textarea (something like naukri.com's resume section)?
    2. How can one retrieve the resume file from the folder on the hard disk? I know how to retrieve data from the database, but I don't know how to retrieve a file from a folder on the disk, for example when the user wants to delete or download it.
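
    Since only the file's name is in the database, retrieval is a matter of rebuilding the disk path and using ordinary filesystem calls; no special query is involved. A sketch reusing the username_filename convention from the question (each snippet belongs in its own request handler, and reading the contents into a textarea only makes sense for plain-text resumes, not .doc/.pdf):

        <?php
        $path = 'uploadedfiles/' . $username . '_' . $filename;

        // show a plain-text resume inside a textarea
        echo '<textarea rows="20" cols="80">'
           . htmlspecialchars(file_get_contents($path))
           . '</textarea>';

        // or: offer the file as a download
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . basename($filename) . '"');
        readfile($path);

        // or: delete it, along with its database row
        unlink($path);
        ?>

    Unrelated to the question but worth flagging: interpolating $_POST values straight into the INSERT statement invites SQL injection; parameterized queries are the safer route.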


  • Limit TCP requests per IP

    - by asmo
    Hello! I'm wondering how to limit TCP requests per client (per specific IP) in Java. For example, I would like to allow a maximum of X requests per Y seconds for each client IP. I thought of using a static Timer/TimerTask in combination with a HashSet of temporarily restricted IPs:

        private static final Set<InetAddress> restrictedIPs =
                Collections.synchronizedSet(new HashSet<InetAddress>());
        private static final Timer restrictTimer = new Timer();

    So when a user connects to the server, I add his IP to the restricted list and start a task to unrestrict him in X seconds:

        restrictedIPs.add(socket.getInetAddress());
        restrictTimer.schedule(new TimerTask() {
            public void run() {
                restrictedIPs.remove(socket.getInetAddress());
            }
        }, MIN_REQUEST_INTERVAL);

    My problem is that by the time the task runs, the socket object may be closed, and the remote IP address won't be accessible anymore... Any ideas welcomed! Also, if someone knows a Java-framework-built-in way to achieve this, I'd really like to hear it.
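
    A small fix sidesteps the closed-socket worry: capture the address in a final local while the connection is open, so the TimerTask never touches the socket at all. A sketch:

        final InetAddress addr = socket.getInetAddress();   // taken while the socket is live
        restrictedIPs.add(addr);
        restrictTimer.schedule(new TimerTask() {
            public void run() {
                restrictedIPs.remove(addr);                 // no reference to the socket here
            }
        }, MIN_REQUEST_INTERVAL);

    For the full "max X requests per Y seconds" behavior, the Set would become a Map from InetAddress to a request counter that the task decrements, but the same capture-the-address idea applies.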


  • Impersonate SYSTEM (or equivalent) from Administrator Account

    - by KevenK
    This question is a follow-up and continuation of this question about a privilege problem I'm dealing with currently.

    Problem summary: I'm running a program under a Domain Administrator account that does not have the Debug programs privilege (SeDebugPrivilege), but I need it on the local machine.

    Klugey solution: the program can install itself as a service on the local machine and start the service. Said service now runs under the SYSTEM account, which enables us to use our SeTcbPrivilege privilege to create a new access token which does have SeDebugPrivilege. We can then use the newly created token to re-launch the initial program with the elevated rights.

    I personally do not like this solution. I feel it should be possible to acquire the necessary privileges as an Administrator without having to make system modifications such as installing a service (even if it is only temporary). I am hoping for a solution that minimizes system modifications and can preferably be done on the fly (i.e., not require the program to restart itself).

    I have unsuccessfully tried to LogonUser as SYSTEM, and tried OpenProcessToken on a known SYSTEM process such as csrss.exe (which fails, because you cannot OpenProcess with PROCESS_QUERY_INFORMATION to get a handle to the process without the privileges I'm trying to acquire). I'm just at my wit's end trying to come up with an alternative solution to this problem. I was hoping there was an easy way to grab a privileged token on the host machine and impersonate it for this program, but I haven't found one. If anyone knows a way around this, or even has suggestions on things that might work, please let me know. I really appreciate the help, thanks!
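
    Before reaching for SYSTEM at all, it may be worth confirming the privilege is truly absent from the token rather than merely disabled: administrator tokens commonly carry privileges in a disabled state, and AdjustTokenPrivileges can enable those in-process, on the fly, with no system modification. A minimal sketch (this only succeeds if the token already holds the privilege; ERROR_NOT_ALL_ASSIGNED means it genuinely isn't there):

        #include <windows.h>

        BOOL EnablePrivilege(LPCWSTR name)   // e.g. L"SeDebugPrivilege"
        {
            HANDLE tok;
            if (!OpenProcessToken(GetCurrentProcess(),
                                  TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &tok))
                return FALSE;

            TOKEN_PRIVILEGES tp = {0};
            tp.PrivilegeCount = 1;
            tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
            LookupPrivilegeValueW(NULL, name, &tp.Privileges[0].Luid);

            AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL);
            BOOL ok = (GetLastError() == ERROR_SUCCESS);   // vs. ERROR_NOT_ALL_ASSIGNED
            CloseHandle(tok);
            return ok;
        }

    If the privilege really has been stripped from the account by policy, no amount of token adjustment will conjure it, and something token-granting (the service hop described above, or changing the "Debug programs" assignment in security policy) remains necessary.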


  • Giving write permissions to the IIS user on Windows Server 2003

    - by Steve
    I am running a website on Windows Server 2003 and IIS6, and I am having problems writing and deleting files in a temporary folder, getting warnings like this:

        Warning: unlink(C:\Inetpub\wwwroot\cakephp\app\tmp\cache\persistent\myapp_cake_core_cake_):
        Permission denied in C:\Inetpub\wwwroot\cakephp\lib\Cake\Cache\Engine\FileEngine.php on line 254

    I went to the tmp directory and, in its properties, gave the IIS user the following permissions:

        Read & Execute
        List Folder Contents
        Read

    It still shows the same warnings. In the properties window, if I click on Advanced, the IIS username appears twice: once with Allow type and Read & Execute permissions, and once with Deny type and Special permissions.

    My question is: should I give this user not only the Read & Execute permissions but also these ones?

        Create Attributes
        Create Files / Write Data
        Create Folders / Append Data
        Delete Subfolders and Files
        Delete

    They are available to select if I click the Edit button over the username. Wouldn't I be opening a security hole if I do this? Otherwise, what can I do so my website can read and delete the files it uses? Thanks.
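
    Two hedged observations. First, the Deny entry matters: in NTFS, Deny ACEs override Allow ACEs, so if that second entry denies write-type rights to the same account, adding Allow permissions will not help until the Deny is removed or narrowed. Second, the listed permissions amount to the standard Modify level, and granting them only on the app's tmp tree rather than the whole site keeps the exposure contained; CakePHP genuinely needs to create and delete its cache files there. On Server 2003 the grant can also be scripted, for example with cacls (the account name is a placeholder for the machine-specific IIS6 anonymous user):

        cacls "C:\Inetpub\wwwroot\cakephp\app\tmp" /E /T /G IUSR_<MACHINE>:C

    /E edits the existing ACL instead of replacing it, /T recurses into subfolders, and C grants Change (read, write, delete).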


  • MySQL "OR MATCH" hangs (very slow) on multiple tables

    - by Kerry
    After learning how to do MySQL full-text search, the recommended solution for searching multiple tables was OR MATCH plus the extra join, which you can see in my query below. When I run it, it just gets stuck in a "busy" state, and I can't access the MySQL database.

        SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`,
               b.`list_price`, b.`price`,
               c.`image`, c.`swatch`,
               e.`name` AS industry,
               MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance
        FROM `products` AS a
        LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`)
        LEFT JOIN (
            SELECT `product_id`, `image`, `swatch`
            FROM `product_images`
            WHERE `sequence` = 0
        ) AS c ON (a.`product_id` = c.`product_id`)
        LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`)
        INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`)
        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE )
           OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE )
        GROUP BY a.`product_id`
        ORDER BY relevance DESC
        LIMIT 0, 9

    Any help would be greatly appreciated.

    EDIT: All the tables involved are MyISAM, utf8_general_ci. Here's the EXPLAIN SELECT output:

        id  select_type  table           type    possible_keys  key         key_len  ref                     rows   Extra
        1   PRIMARY      a               ALL     NULL           NULL        NULL     NULL                    16076  Using temporary; Using filesort
        1   PRIMARY      b               ref     product_id     product_id  4        database.a.product_id   2
        1   PRIMARY      e               eq_ref  PRIMARY        PRIMARY     4        database.a.industry_id  1
        1   PRIMARY      <derived2>      ALL     NULL           NULL        NULL     NULL                    23261
        1   PRIMARY      d               eq_ref  PRIMARY        PRIMARY     4        database.a.brand_id     1      Using where
        2   DERIVED      product_images  ALL     NULL           NULL        NULL     NULL                    25933  Using where

    UPDATE: It returns the query after 196 seconds (I think correctly). The query without the second table's MATCH takes about 0.56 seconds (which I know is really slow; we plan to change to Solr or Sphinx soon), but 196 seconds?? If we could boost the relevance score when the term matches the brand name (d.name), that would also work.
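
    One thing to rule out before blaming full-text itself: AND binds tighter than OR in MySQL, so the WHERE clause as written is really (website/status/active filters AND product MATCH) OR brand MATCH, meaning the brand match runs unfiltered across every row. Parenthesizing changes both the semantics and, usually, the runtime; and since MATCH ... AGAINST returns a score, adding the two gives the brand-boosted relevance asked for in the update. A sketch of both changes:

        SELECT ...,
               MATCH(a.`name`, a.`sku`, a.`description`) AGAINST ('%s' IN BOOLEAN MODE)
             + MATCH(d.`name`) AGAINST ('%s' IN BOOLEAN MODE) AS relevance
        ...
        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND ( MATCH(a.`name`, a.`sku`, a.`description`) AGAINST ('%s' IN BOOLEAN MODE)
             OR MATCH(d.`name`) AGAINST ('%s' IN BOOLEAN MODE) )

    Even parenthesized, OR across two tables' full-text indexes often defeats index selection in MySQL; if it stays slow, the usual workaround is two indexed subqueries combined with UNION rather than one OR.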


  • Some languages don't work when using Word 2007 Spellcheck from Interop

    - by Tridus
    I'm using the Word 2007 spellchecker via Interop in a VB.NET desktop app. When using the default language (English), it works fine. If I set the language to French via LanguageId, it also works. But if I set it to French (Canadian) (Word.WdLanguageID.wdFrenchCanadian), it doesn't work: there's no error message, it simply runs and says the document contains no errors. I know it has errors; if I paste the exact same text into Word itself and run it with the French (Canadian) dictionary, it finds them. Just why that dictionary doesn't work is kind of a mystery to me. Full code below:

        Public Shared Function SpellCheck(ByVal text As String, ByVal checkGrammar As Boolean) As String
            ' If there is no data to spell check, then exit here.
            If text.Length = 0 Then
                Return text
            End If

            Dim objWord As Word.Application
            Dim objTempDoc As Word.Document
            ' Declare an IDataObject to hold the data returned from the clipboard.
            Dim iData As IDataObject

            objWord = New Word.Application()
            objTempDoc = objWord.Documents.Add
            objWord.Visible = False

            ' Position Word off the screen... this keeps Word invisible throughout.
            objWord.WindowState = 0
            objWord.Top = -3000

            ' Copy the contents of the textbox to the clipboard.
            Clipboard.SetDataObject(text)

            ' With the temporary document, perform either a spell check or a
            ' complete grammar check, based on user selection.
            With objTempDoc
                .Content.Paste()
                .Activate()
                .Content.LanguageID = Word.WdLanguageID.wdFrenchCanadian
                If checkGrammar Then
                    .CheckGrammar()
                Else
                    .CheckSpelling()
                End If
                ' After the user has made changes, use the clipboard to
                ' transfer the contents back to the text box.
                .Content.Copy()
                iData = Clipboard.GetDataObject
                If iData.GetDataPresent(DataFormats.Text) Then
                    text = CType(iData.GetData(DataFormats.Text), String)
                End If
                .Saved = True
                .Close()
            End With

            objWord.Quit()
            Return text
        End Function
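
    A hedged avenue to check, since "runs clean with no errors" is typically what Word reports when no proofing dictionary is active for a language: confirm that French (Canadian) proofing tools are actually resolvable through the object model in the automation instance, which may be configured differently from Word's own UI. Something along these lines:

        Dim lang As Word.Language = objWord.Languages(Word.WdLanguageID.wdFrenchCanadian)
        Try
            Dim dict As Word.Dictionary = lang.ActiveSpellingDictionary
            ' Reaching here with a valid dictionary means proofing tools exist;
            ' Nothing (or an exception) points at missing French (Canadian) proofing tools.
        Catch ex As System.Runtime.InteropServices.COMException
            ' No active spelling dictionary for this LanguageID on this machine.
        End Try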


  • In SQL, can I return a table with a varying number of columns?

    - by Matt
    I have a somewhat more complicated scenario, but I think it should be possible. I have a large SPROC whose result is a set of characteristics for a set of persons, so the table would look something like this:

        Property | Client1   Client2   Client3
        ---------+---------------------------
        Sex      | M         F         M
        Age      | 67        56        67
        Income   | Low       Mid       Low

    It's built using cursors, iterating over different datasets. The problem I am facing is that there is a varying number of clients and properties, so an equally valid result over a different input set might be:

        Property | Client1   Client2
        ---------+------------------
        Sex      | M         F
        Age      | 67        56
        Weight   | 122       122

    The varying number of properties is easy; those are just extra rows. My problem is that I need to declare a temporary table with a varying number of columns: there could be 2 clients or 100, and every client is guaranteed to have every property ultimately listed. What SQL structure would satisfy this, and how can I declare it and insert things into it? I can't just flip the columns and rows either, because there is a variable number of each.
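
    A static statement (or table declaration) cannot change its column list at runtime, so the standard answer on SQL Server is to build the pivot dynamically: collect the client names into a column list, then construct and execute the statement as a string. A sketch with made-up table and column names (the real source would be whatever rows the cursors currently aggregate):

        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        -- "[Client1],[Client2],..." from whatever clients exist this run
        SELECT @cols = STUFF((
            SELECT ',' + QUOTENAME(client_name)
            FROM   clients
            FOR XML PATH('')), 1, 1, '');

        SET @sql = N'SELECT property, ' + @cols + N'
                     FROM client_properties
                     PIVOT (MAX(value) FOR client_name IN (' + @cols + N')) AS p;';

        EXEC sp_executesql @sql;

    The alternative is to return the data unpivoted (one row per client/property pair) and let the presentation layer build the cross-tab, which scales to any number of clients without dynamic SQL.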


  • Shuffling words in a sentence in JavaScript (coding horror - how to improve?)

    - by Bill Zimmerman
    Hi, I'm trying to do something that is fairly simple, but my code looks terrible and I am certain there is a better way to do things in JavaScript. I am new to JavaScript and trying to improve my coding; this just feels very messy. All I want to do is randomly change the order of some words on a web page. In Python, the code would look something like this:

        s = 'This is a sentence'
        shuffledSentence = random.shuffle(s.split(' ')).join(' ')

    However, this is the monstrosity I've managed to produce in JavaScript:

        // need a custom sorting function because JavaScript doesn't have shuffle?
        function mySort(a, b) {
            return a.sortValue - b.sortValue;
        }

        function scrambleWords() {
            var content = $.trim($(this).contents().text());
            splitContent = content.split(' ');

            // need to create a temporary array of objects to make sorting easier
            var tempArray = new Array(splitContent.length);
            for (var i = 0; i < splitContent.length; i++) {
                // create an object that can be assigned a random number for sorting
                var tmpObj = new Object();
                tmpObj.sortValue = Math.random();
                tmpObj.string = splitContent[i];
                tempArray[i] = tmpObj;
            }
            tempArray.sort(mySort);

            // copy the strings back to the original array
            for (i = 0; i < splitContent.length; i++) {
                splitContent[i] = tempArray[i].string;
            }
            content = splitContent.join(' ');

            // the result
            $(this).text(content);
        }

    Can you help me simplify this?
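
    JavaScript indeed ships no shuffle, but a small Fisher-Yates helper gets close to the Python one-liner while staying unbiased. (The decorate-with-random-keys approach above is statistically fine; Fisher-Yates just does the same job shorter and in place. The tempting shortcut of passing a random comparator straight to sort(), by contrast, is biased.) A sketch:

        // Fisher-Yates: unbiased in-place shuffle
        function shuffle(arr) {
            for (var i = arr.length - 1; i > 0; i--) {
                var j = Math.floor(Math.random() * (i + 1));
                var tmp = arr[i];
                arr[i] = arr[j];
                arr[j] = tmp;
            }
            return arr;
        }

        function scrambleWords() {
            var content = $.trim($(this).contents().text());
            $(this).text(shuffle(content.split(' ')).join(' '));
        }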


  • Improving performance in this query

    - by Luiz Gustavo F. Gama
    I have 3 tables with user logins:

        sis_login       = administrators
        tb_rb_estrutura = coordinators
        tb_usuario      = clients

    I created a VIEW to unite all these users, separating them by level, as follows:

        create view `login_names` as
          select `n1`.`cod_login` as `id`, '1' as `level`, `n1`.`nom_user` as `name`
          from `dados`.`sis_login` `n1`
          union all
          select `n2`.`id` as `id`, '2' as `level`, `n2`.`nom_funcionario` as `name`
          from `tb_rb_estrutura` `n2`
          union all
          select `n3`.`cod_usuario` as `id`, '3' as `level`, `n3`.`dsc_nome` as `name`
          from `tb_usuario` `n3`;

    So up to three users can share the same id, which is why I separate them by level. This VIEW exists just to return a user's name, given his id and level. With about 500,000 registered users, the view takes about 1 second to load. That is already too much, but it becomes a real problem when I need to return the latest posts in the forums of my website. The forum tables store the user id and level, and the name is then looked up in this VIEW. I have 18 forums registered; when I run the query, it takes one second per forum = 18 seconds. OMG. This page loads every time somebody enters my website. This is my query:

        select `x`.`forum_id`, `x`.`topic_id`, `l`.`nome`
        from (
          select `t`.`forum_id`, `t`.`topic_id`, `t`.`data`, `t`.`user_id`, `t`.`user_level`
          from `tb_forum_topics` `t`
          union all
          select `a`.`forum_id`, `a`.`topic_id`, `a`.`data`, `a`.`user_id`, `a`.`user_level`
          from `tb_forum_answers` `a`
        ) `x`
        left outer join `login_names` `l`
          on `l`.`id` = `x`.`user_id` and `l`.`level` = `x`.`user_level`
        group by `x`.`forum_id` asc

    USING EXPLAIN:

        id    select_type   table       type  possible_keys  key   key_len  ref   rows    Extra
        1     PRIMARY       <derived2>  ALL   NULL           NULL  NULL     NULL  6       Using temporary; Using filesort
        1     PRIMARY       <derived4>  ALL   NULL           NULL  NULL     NULL  530415
        4     DERIVED       n1          ALL   NULL           NULL  NULL     NULL  114
        5     UNION         n2          ALL   NULL           NULL  NULL     NULL  2
        6     UNION         n3          ALL   NULL           NULL  NULL     NULL  530299
        NULL  UNION RESULT              ALL   NULL           NULL  NULL     NULL  NULL
        2     DERIVED       t           ALL   NULL           NULL  NULL     NULL  3
        3     UNION         r           ALL   NULL           NULL  NULL     NULL  3
        NULL  UNION RESULT              ALL   NULL           NULL  NULL     NULL  NULL

    Can somebody help me or give a suggestion?
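
    The EXPLAIN rows tell the story: a view containing UNION cannot be merged into the outer query, so MySQL materializes all ~530,000 name rows into a temporary table (the <derived4> line) and then scans it, unindexed, for the join. Joining the three base tables directly lets each lookup use its own key; a sketch, assuming cod_login, id, and cod_usuario are indexed (the derived table of topics and answers stays as in the original):

        select x.`forum_id`, x.`topic_id`,
               coalesce(n1.`nom_user`, n2.`nom_funcionario`, n3.`dsc_nome`) as `nome`
        from ( ...same union of tb_forum_topics and tb_forum_answers... ) x
        left join `dados`.`sis_login` n1
               on x.`user_level` = '1' and n1.`cod_login`   = x.`user_id`
        left join `tb_rb_estrutura`  n2
               on x.`user_level` = '2' and n2.`id`          = x.`user_id`
        left join `tb_usuario`       n3
               on x.`user_level` = '3' and n3.`cod_usuario` = x.`user_id`
        group by x.`forum_id` asc

    Each row then costs three indexed lookups (two of which fail fast on the level test) instead of a scan over half a million materialized rows.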


  • PHP file upload issue

    - by Varun
    I am working on a PHP-based ticket management system. While creating a ticket, one can upload an attachment. I want to put a limit (say 10 MB) per uploaded file. To implement this I plan the following:

    1. In php.ini, set post_max_size = 10M.
    2. In the PHP script which receives the POST: since the file is larger than post_max_size, $_FILES[] will be empty, but I can still check the Content-Length header and discard the upload if the size is more than 10 MB.

    While testing this I tried uploading a 1 GB file and analysed the HTTP traffic, and this is what I found:

    The entire 1 GB of data was first uploaded to the server temporarily and discarded once the HTTP request completed. I couldn't find out exactly where the file was being saved (it was not in the temporary directory on the server), but my HTTP traffic analyzer showed that the browser did send 1 GB of data to the server. Also, the PHP script execution started only after completion of the HTTP request (i.e., after the entire 1 GB was uploaded).

    Now I have two concerns:

    a) People may exploit my server bandwidth by uploading large files which I will have to discard anyway.
    b) Even worse, if someone starts uploading a huge file (say 100 GB), the entire 100 GB is first uploaded to the server temporarily, which means that for that period it will consume that much memory on my server.

    What's the common solution for this? Am I missing something here?
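
    PHP genuinely cannot refuse the body early; by the time the script runs, the request has already been received. The usual answer is to cap the request size in the web server itself, which rejects the transfer before PHP (and most of the bandwidth) is spent. The directives below are the standard ones for each server:

        # Apache (httpd.conf or .htaccess): reject bodies over 10 MB
        LimitRequestBody 10485760

        # nginx equivalent
        client_max_body_size 10m;

    With either in place, an oversized upload is answered with a 413-style error instead of being read to completion, which addresses both the bandwidth concern and the 100 GB one. A client-side size check before the upload starts makes for friendlier UX, but the server-side cap is the enforceable limit.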


  • What can I use to journal writes to the file system?

    - by Dmitry
    Hello, all. I need to track all writes to files in order to keep a synchronized version of the files somewhere else (a server, or just another directory; it doesn't matter which). Constraints:

    - All files are located in the same directory.
    - Feel free to create some system files (e.g. SomeFileName.Ext~temp-data).
    - No one has concurrent access to the synced directory; nobody spoils our meta-files or changes the real files before we perform our postponed writes (like commits).
    - There is no need to recover "local" changes in case of a crash; the system can simply be rolled back to the "server" state by copying from it.
    - It is important that it be transparent to use (so a programmer just calls the ordinary fopen(), read(), write()).

    It must be guaranteed that the copy of the files the "server" holds is consistent: the whole set of files as it existed at some moment in time. It may be significantly outdated, but it must be a fair snapshot of all files at a single point in time.

    As I understand it, I should overload the writing logic to collect data to send to the "server" (for example, writing to a temporary File~tmp, like a commit), and also overload reads so the program can still read the actual data of the file. It would be great if you could suggest an existing library (Java or C++, it is unimportant) or solution (VCS customizing?), or give hints on how I should write it myself.

    Edit: after some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and the other FS API system calls. A log-structured file system in userspace would be an alternative too.
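
    For the COW wrapper itself, the shape is small enough to sketch in C++ (a thin layer over stdio, not an interceptor for code you don't control; hooking WriteFile() in arbitrary binaries on Windows would instead need something like Detours or a filter driver). Writers are redirected to a side file, and "commit" publishes it with an atomic rename, which is also the natural moment to record the change for the server copy:

        #include <cstdio>
        #include <string>

        // Writes go to path + "~temp-data"; reads keep using the real file,
        // so readers always see the last committed snapshot.
        FILE* cow_fopen(const std::string& path, const char* mode, std::string* tmpOut)
        {
            if (mode[0] == 'r') return std::fopen(path.c_str(), mode);
            *tmpOut = path + "~temp-data";
            return std::fopen(tmpOut->c_str(), mode);
        }

        // Commit: atomically replace the original (POSIX rename semantics;
        // on Windows, MoveFileEx with MOVEFILE_REPLACE_EXISTING is needed,
        // since rename() there won't overwrite an existing target).
        // On success, append `path` to the journal shipped to the server.
        bool cow_commit(const std::string& path, const std::string& tmp)
        {
            return std::rename(tmp.c_str(), path.c_str()) == 0;
        }

    For a ready-made "consistent snapshot of a directory" without writing this layer, filesystem snapshots (LVM, VSS) or a local git/svn repository committed at your consistency points also match the fair-snapshot requirement directly.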


  • Is there a FAST way to export and install an app on my phone, while signing it with my own keystore?

    - by Alexei Andreev
    So, I've downloaded my own application from the Market and installed it on my phone. Now I am trying to install a temporary new version from Eclipse, but here is the message I get:

        Re-installation failed due to different application signatures.
        You must perform a full uninstall of the application. WARNING: This will remove the application data!
        Please execute 'adb uninstall com.applicationName' in a shell.
        Launch canceled!

    Now, I really, really don't want to uninstall the application, because I will lose all my data. One solution I found is to export my application, creating a new .apk, and then install it via HTC Sync (probably a different program depending on what phone you have). The problem is this takes a long time, since I need to enter the password for the keystore each time and then wait for HTC Sync. It's a pain!

    So the question is: is there a way to make Eclipse automatically use my keystore to sign the application (quickly and automatically)? Or perhaps to replace the debug keystore with my own? Or perhaps just tell it to remember the password, so I don't have to enter it every time? Or some other way to solve this problem?
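
    Eclipse ADT has a "Custom debug keystore" setting (Preferences > Android > Build) that signs every run automatically with no password prompt; the catch is that ADT expects debug-keystore conventions: store and key password android, key alias androiddebugkey. A copy of the release keystore can be bent into that shape with keytool (work on a copy, never the original; the alias below is a placeholder, and -changealias needs Java 6 or later):

        keytool -storepasswd -keystore copy.keystore
            # set the store password to: android
        keytool -keypasswd -alias my_release_alias -keystore copy.keystore
            # set the key password to: android
        keytool -changealias -alias my_release_alias -destalias androiddebugkey -keystore copy.keystore

    Pointing the ADT preference at copy.keystore then makes Run/Debug deploys carry the Market signature, so they install straight over the published version with data intact.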


  • How to exclude rows where matching join is in an SQL tree

    - by Greg K
    Sorry for the poor title; I couldn't think how to concisely describe this problem.

    I have a set of items that should have a 1-to-1 relationship with an attribute. I have a query to return the rows where the data is wrong and this relationship has been broken (1-to-many). I'm gathering these rows to fix them and restore the 1-to-1 relationship. This is a theoretical simplification of my actual problem, but I'll post the example table schema here as requested.

    item table:

        +------------+------------+-----------+
        | item_id    | name       | attr_id   |
        +------------+------------+-----------+
        | 1          | BMW 320d   | 20        |
        | 1          | BMW 320d   | 21        |
        | 2          | BMW 335i   | 23        |
        | 2          | BMW 335i   | 34        |
        +------------+------------+-----------+

    attribute table:

        +---------+-----------------+------------+
        | attr_id | value           | parent_id  |
        +---------+-----------------+------------+
        | 20      | SE              | 21         |
        | 21      | M Sport         | 0          |
        | 23      | AC              | 24         |
        | 24      | Climate control | 0          |
        ....
        | 34      | Leather seats   | 0          |
        +---------+-----------------+------------+

    A simple query to return items with more than one attribute:

        SELECT item_id, COUNT(DISTINCT(attr_id)) AS attributes
        FROM item
        GROUP BY item_id
        HAVING attributes > 1

    This gets me a result set like so:

        +-----------+------------+
        | item_id   | attributes |
        +-----------+------------+
        | 1         | 2          |
        | 2         | 2          |
        | 3         | 2          |
        -- etc. --

    However, there's an exception. The attribute table can hold a tree structure via parent links: for certain rows, parent_id holds the ID of another attribute, and there's only one level to this tree. Example:

        +---------+-----------------+------------+
        | attr_id | value           | parent_id  |
        +---------+-----------------+------------+
        | 20      | SE              | 21         |
        | 21      | M Sport         | 0          |
        ....

    I do not want to retrieve items where a pair of associated attributes is related the way attributes 20 & 21 are. I do want to retrieve items where:

    - the attributes have no parent
    - two or more of the attributes are not related (e.g. attributes 23 & 34)

    Example result desired, just the item ID:

        +------------+
        | item_id    |
        +------------+
        | 2          |
        +------------+

    How can I join against attributes from items and exclude these rows (as sketched below)? Do I use a temporary table, or can I achieve this in a single query? Thanks.
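
    It fits in a single query: extend the counting query with a self-join that looks for a parent/child pair among each item's own attributes, and keep only items where no such pair exists. A sketch against the schema above (worth testing on items with three or more attributes, where "not related" may need a stricter definition):

        SELECT i.item_id
        FROM item i
        JOIN attribute a
          ON a.attr_id = i.attr_id
        LEFT JOIN item sib                        -- does this item also carry a's parent?
          ON sib.item_id = i.item_id
         AND sib.attr_id = a.parent_id
        GROUP BY i.item_id
        HAVING COUNT(DISTINCT i.attr_id) > 1      -- still 1-to-many...
           AND COUNT(sib.attr_id) = 0;            -- ...with no related pair among them

    On the sample data, item 1 is dropped because attribute 20's parent (21) is also attached to item 1, while item 2 survives since 23's parent (24) is not among its attributes.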


  • Objective-C / Cocoa: Uploading Images, Working Memory, And Storage.

    - by Finley Still
    Hello. I'm in the process of porting an application originally written in Java to Cocoa, and I'm rewriting it to make it much better, since I prefer Cocoa a lot anyway.

    One of the problems I had in the application was image handling: when you uploaded images to it, I created image objects (say, NSImage) which just sat in memory. The more you uploaded, the more memory they took, and I ended up running out of memory.

    My question is this: if users are going to upload images to this Cocoa application, how should I go about storing them? I don't just want to copy the file paths, because I want what is saved to contain the images themselves. Is there a way to upload an image and copy it into a different place just for my application, and then load that image from the new path as needed? I would like it all to be consolidated: I'm going to implement saving by archiving one "master" object into an NSData, so I'd like the images to be saved with that. Is there a temporary location where I could write the images to disk for my application, and then, when I save, have them all archived into a single file? Also, how do I do this? Thanks.
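
    A common Cocoa arrangement: copy each imported image into the app's own folder under Application Support, keep only the file names in the "master" object, and load images lazily via those names, so memory holds only what is currently needed. A sketch (the app and folder names are placeholders):

        NSString *base = [NSSearchPathForDirectoriesInDomains(
            NSApplicationSupportDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *dir = [base stringByAppendingPathComponent:@"MyApp/Images"];

        NSFileManager *fm = [NSFileManager defaultManager];
        [fm createDirectoryAtPath:dir
            withIntermediateDirectories:YES attributes:nil error:NULL];

        // import: copy the file in; archive only `name` with the master object
        NSString *dest = [dir stringByAppendingPathComponent:name];
        [fm copyItemAtPath:sourcePath toPath:dest error:NULL];

        // later, load on demand instead of holding every image in memory:
        NSImage *image = [[NSImage alloc] initWithContentsOfFile:dest];

    If the saved document truly must be one self-contained file, the image bytes can be archived as NSData alongside the master object instead, but that recreates the everything-in-memory problem at save/load time; names plus a managed folder is the lighter option.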

