Search Results

Search found 19279 results on 772 pages for 'everything'.

Page 618 of 772

  • How to detect if a file is PDF or TIFF?

    - by eviljack
    Please bear with me as I've been thrown into the middle of this project without knowing all the background. If you've got WTF questions, trust me, I have them too. Here is the scenario: I've got a bunch of files residing on an IIS server. They have no file extension on them. Just naked files with names like "asda-2342-sd3rs-asd24-ut57" and so on. Nothing intuitive. The problem is I need to serve up files on an ASP.NET (2.0) page and display the tiff files as tiff and the PDF files as PDF. Unfortunately I don't know which is which and I need to be able to display them appropriately in their respective formats. For example, lets say that there are 2 files I need to display, one is tiff and one is PDF. The page should show up with a tiff image, and perhaps a link that would open up the PDF in a new tab/window. The problem: As these files are all extension-less I had to force IIS to just serve everything up as TIFF. But if I do this, the PDF files won't display. I could change IIS to force the MIME type to be PDF for unknown file extensions but I'd have the reverse problem. http://support.microsoft.com/kb/326965 Is this problem easier than I think or is it as nasty as I am expecting?
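
    A sketch of the direction I'm leaning: ignore the (missing) extension entirely and sniff the first four bytes of each file, since PDF always starts with "%PDF" and TIFF with "II*\0" or "MM\0*". The helper name and the octet-stream fallback below are just illustrative:

        // using System.IO;
        static string DetectContentType(string path)
        {
            byte[] header = new byte[4];
            using (FileStream fs = File.OpenRead(path))
            {
                fs.Read(header, 0, header.Length);
            }
            // %PDF
            if (header[0] == 0x25 && header[1] == 0x50 && header[2] == 0x44 && header[3] == 0x46)
                return "application/pdf";
            // II*\0 (little-endian TIFF) or MM\0* (big-endian TIFF)
            if ((header[0] == 0x49 && header[1] == 0x49 && header[2] == 0x2A && header[3] == 0x00) ||
                (header[0] == 0x4D && header[1] == 0x4D && header[2] == 0x00 && header[3] == 0x2A))
                return "image/tiff";
            return "application/octet-stream";
        }

    The ASP.NET page could then set Response.ContentType from this before streaming the bytes, so IIS never has to guess from an extension.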

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions has many Payments In my result set, I need the following aggregate values: Number of unique contributors (DonorID on the Contribution table) Total contributed (SUM of Amount on Contribution table) Total paid (SUM of PaymentAmount on Payment table) Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions and the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options: Using subqueries: SELECT Project.ID AS PROJECT_ID, (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack, (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount, (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived FROM Project; Using a temporary table: DROP TABLE IF EXISTS Project_Temp; CREATE TEMPORARY TABLE Project_Temp (project_id INT NOT NULL, total_payments INT, total_donors INT, total_received INT, PRIMARY KEY(project_id)) ENGINE=MEMORY; INSERT INTO Project_Temp (project_id,total_payments) SELECT `Project`.ID, IFNULL(SUM(PaymentAmount),0) FROM `Project` LEFT JOIN `Payment` ON ProjectID = `Project`.ID GROUP BY 1; INSERT INTO Project_Temp (project_id,total_donors,total_received) SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID),0), IFNULL(SUM(Amount),0) FROM `Project` LEFT JOIN `Contribution` ON RecipientID = `Project`.ID GROUP BY 1 ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors), total_received = VALUES(total_received); SELECT * FROM Project_Temp; Tests for both are pretty comparable, in the 0.7 - 0.8 seconds range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
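
    A third option I'd want to benchmark alongside the two above (a sketch, using the same table and column names): pre-aggregate each child table once in a derived table and join, which avoids the per-row correlated subqueries without needing the MEMORY table.

        SELECT `Project`.ID AS PROJECT_ID,
               IFNULL(pay.TotalPaidBack, 0)    AS TotalPaidBack,
               IFNULL(con.ContributorCount, 0) AS ContributorCount,
               IFNULL(con.TotalReceived, 0)    AS TotalReceived
        FROM `Project`
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS TotalPaidBack
                   FROM `Payment`
                   GROUP BY ProjectID) AS pay ON pay.ProjectID = `Project`.ID
        LEFT JOIN (SELECT RecipientID,
                          COUNT(DISTINCT DonorID) AS ContributorCount,
                          SUM(Amount)             AS TotalReceived
                   FROM `Contribution`
                   GROUP BY RecipientID) AS con ON con.RecipientID = `Project`.ID;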

    Read the article

  • Obfuscating ASP.Net dll breaks web application.

    - by uriDium
    I wouldn't usually bother to obfuscate a web application DLL but right now I have to share some server space with someone who might have a conflict of interest and might be tempted to steal the deal and decompile it. Not an ideal solution I know but hey. So I am using VS 2005, a web deployment project (which compiles into a single DLL) and Dotfuscator community edition. When I obfuscate the DLL the web application breaks and I get some message like Could not load type 'Browse' from assembly MyAssembly. So I searched around and found that if I disable renaming then it should fix it. Which it does. But now when I look at the DLL using .Net reflector I can see everything again. So this seems kind of pointless. Is there a way to get this to work? Is there a better way to protect my DLL from someone I have to share a server with? UPDATE: I figured out my problem. All the class names have changed but now all my <%@ Page Language="C#" AutoEventWireup="true" CodeFile="mycode.aspx.cs" Inherits="mycode" %> is incorrect because mycode no longer exists. It is now aef or something. Is there any tool out there that will also change the names of the CodeFile and Inherits tags?
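
    One middle ground I'm considering, assuming the obfuscator honours System.Reflection.ObfuscationAttribute (the commercial Dotfuscator documents this; I have not verified the Community Edition): keep only the type names that the .aspx Inherits attributes point at, and let everything else still be renamed.

        // using System.Reflection;
        [Obfuscation(Exclude = true, ApplyToMembers = false)]
        public partial class mycode : System.Web.UI.Page
        {
            // members can still be renamed; only the type name is preserved
        }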

    Read the article

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file and it works perfectly except that for a few of the files (17 out of 526), the FAT chain is one single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster: //Cluster is the starting cluster of the file //Size is the size (in bytes) of the file //BPC is the number of bytes per cluster //NumClust is the number of clusters in the file //EOF is the last cluster of the file’s FAT chain DWORD NumClust = ceil( (float)(Size / BPC) ) DWORD EOF = Cluster + NumClust; This algorithm works fine for everything except files whose size happens to be exactly a multiple of the cluster size, in which case they end up being one cluster too long. I thought about it for a while but am at a loss as to how to fix it. It seems like it should be simple but somehow it is surprisingly tricky. What formula would work for files of any size?
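
    For the record, the float cast above also never helps: Size / BPC is evaluated in integer arithmetic first, so ceil only ever sees a whole number. A pure-integer version of the rounding (a sketch, same variable names) avoids both problems:

        // Rounds up only when there is a remainder, so files that are an exact
        // multiple of the cluster size are left alone.
        DWORD NumClust = (Size + BPC - 1) / BPC;
        DWORD EOF = Cluster + NumClust - 1;   // last cluster actually used by the file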

    Read the article

  • Accessing global variable in multithreaded Tomcat server

    - by jwegan
    I have a singleton object that I construct like thus: private static volatile KeyMapper mapper = null; public static KeyMapper getMapper() { if(mapper == null) { synchronized(Utils.class) { if(mapper == null) { mapper = new LocalMemoryMapper(); } } } return mapper; } The class KeyMapper is basically a synchronized wrapper to HashMap with only two functions, one to add a mapping and one to remove a mapping. When running in Tomcat 6.24 on my 32bit Windows machine everything works fine. However when running on a 64 bit Linux machine (CentOS 5.4 with OpenJDK 1.6.0-b09) I add one mapping and print out the size of the HashMap used by KeyMapper to verify the mapping got added (i.e. verify size = 1). Then I try to retrieve the mapping with another request and I keep getting null and when I checked the size of the HashMap it was 0. I'm confident the mapping isn't accidentally being removed since I've commented out all calls to remove (and I don't use clear or any other mutators, just get and put). The requests are going through Tomcat 6.24 (configured to use 200 threads with a minimum of 4 threads) and I passed -Xnoclassgc to the jvm to ensure the class isn't inadvertently getting garbage collected (jvm is also running in -server mode). I also added a finalize method to KeyMapper to print to stderr if it ever gets garbage collected to verify that it wasn't being garbage collected. I'm at my wits end and I can't figure out why one minute the entry in HashMap is there and the next it isn't :(
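
    In case it helps anyone else hitting this, two things I plan to try next: check whether the webapp is deployed twice (two contexts means two classloaders and therefore two copies of every static field), and replace the double-checked locking with the initialization-on-demand holder idiom, which the JLS guarantees is lazy and thread-safe. A sketch of the latter, reusing the names above:

        public final class Utils {
            private Utils() { }

            // Loaded (and therefore initialized) only on the first getMapper() call.
            private static final class MapperHolder {
                static final KeyMapper INSTANCE = new LocalMemoryMapper();
            }

            public static KeyMapper getMapper() {
                return MapperHolder.INSTANCE;
            }
        }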

    Read the article

  • XML-RPC over SSL with Ruby: end of file reached (EOFError)

    - by Michael Conigliaro
    Hello, I have some very simple Ruby code that is attempting to do XML-RPC over SSL: require 'xmlrpc/client' require 'pp' server = XMLRPC::Client.new2("https://%s:%d/" % [ 'api.ultradns.net', 8755 ]) pp server.call2('UDNS_OpenConnection', 'sponsor', 'username', 'password') The problem is that it always results in the following EOFError exception: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:135:in `sysread': end of file reached (EOFError) So it appears that after doing the POST, I don't get anything back. Interestingly, this is the behavior I would expect if I tried to make an HTTP connection on the HTTPS port (or vice versa), and I actually do get the same exact exception if I change the protocol. Everything I've looked at indicates that using "https://" in the URL is enough to enable SSL, but I'm starting to wonder if I've missed something. Note that even though the credentials I'm using in the RPC are made up, I'm expecting to at least get back an XML error page (similar to if you access https://api.ultradns.net:8755/ with a web browser). I've tried running this code on OSX and Linux with the exact same result, so I have to conclude that I'm just doing something wrong here. Does anyone have any examples of doing XML-RPC over SSL with Ruby?
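
    One experiment worth trying (a sketch, untested; positional arguments per the 1.8 xmlrpc/client signature: host, path, port, proxy_host, proxy_port, user, password, use_ssl): pass use_ssl explicitly instead of relying on the "https://" scheme being honoured by new2.

        require 'xmlrpc/client'
        require 'pp'

        server = XMLRPC::Client.new('api.ultradns.net', '/', 8755,
                                    nil, nil, nil, nil, true)   # use_ssl => true
        pp server.call2('UDNS_OpenConnection', 'sponsor', 'username', 'password')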

    Read the article

  • Python 3.3 Webserver restarting problems

    - by IPDGino
    I have made a simple webserver in python, and had some problems with it before as described here: Python (3.3) Webserver script with an interesting error In that question, the answer was to use a While True: loop so that any crashes or errors would be resolved instantly, because it would just start itself again. I've used this for a while, and still want to make the server restart itself every few minutes, but on Linux for some reason it won't work for me. On windows the code below works fine, but on linux it keeps saying Handler class up here ... ... class Server: def __init__(self): self.server_class = HTTPServer self.server_adress = ('MY IP GOES HERE, or localhost', 8080) global httpd httpd = self.server_class(self.server_adress, Handler) self.main() def main(self): if count > 1: global SERVER_UP_SINCE HOUR_CHECK = int(((count - 1) * RESTART_INTERVAL) / 60) SERVER_UPTIME = str(HOUR_CHECK) + " MINUTES" if HOUR_CHECK > 60: minutes = int(HOUR_CHECK % 60) hours = int(HOUR_CHECK // 60) SERVER_UPTIME = ("%s HOURS, %s MINUTES" % (str(hours), str(minutes))) SERVING_ON_ADDR = self.server_adress SERVER_UP_SINCE = str(SERVER_UP_SINCE) SERVER_RESTART_NUMBER = count - 1 print(""" SERVER INFO ------------------------------------- SERVER_UPTIME: %s SERVER_UP_SINCE: %s TOTAL_FILES_SERVED: %d SERVING_ON_ADDR: %s SERVER_RESTART_NUMBER: %s \n\nSERVER HAS RESTARTED """ % (SERVER_UPTIME, SERVER_UP_SINCE, TOTAL_FILES, SERVING_ON_ADDR, SERVER_RESTART_NUMBER)) else: print("SERVER_BOOT=1\nSERVER_ONLINE=TRUE\nRESTART_LOOP=TRUE\nSERVING_ON_ADDR:%s" % str(self.server_adress)) while True: try: httpd.serve_forever() except KeyboardInterrupt: print("Shutting down...") break httpd.shutdown() httpd.socket.close() raise(SystemExit) return def server_restart(): """If you want the restart timer to be longer, replace the number after the RESTART_INTERVAL variable""" global RESTART_INTERVAL RESTART_INTERVAL = 10 threading.Timer(RESTART_INTERVAL, server_restart).start() global count count = count + 1 instance = Server() if __name__ == "__main__": global SERVER_UP_SINCE SERVER_UP_SINCE = strftime("%d-%m-%Y %H:%M:%S", gmtime()) server_restart() Basically, I make a thread to restart it every 10 seconds (For testing purposes) and start the server. After ten seconds it will say File "/home/username/Desktop/Webserver/server.py", line 199, in __init__ httpd = self.server_class(self.server_adress, Handler) File "/usr/lib/python3.3/socketserver.py", line 430, in __init__ self.server_bind() File "/usr/lib/python3.3/http/server.py", line 135, in server_bind socketserver.TCPServer.server_bind(self) File "/usr/lib/python3.3/socketserver.py", line 441, in server_bind self.socket.bind(self.server_address) OSError: [Errno 98] Address already in use As you can see in the except KeyboardInterruption line, I tried everything to make the server stop, and the program stop, but it will NOT stop. But the thing I really want to know is how to make this server able to restart, without giving some wonky errors.
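
    For what it's worth, http.server.HTTPServer already sets allow_reuse_address, so Errno 98 here really means the previous instance is still bound when the new one starts. A rough sketch of the ordering fix (names from the script above, untested): stop and close the old server before constructing the new one, and drop the while True wrapper, since serve_forever() already loops internally.

        # in Server.main(), instead of the while True loop:
        #     try:
        #         httpd.serve_forever()
        #     except KeyboardInterrupt:
        #         print("Shutting down...")
        #     finally:
        #         httpd.server_close()      # this is what frees port 8080

        def server_restart():
            global RESTART_INTERVAL, count
            RESTART_INTERVAL = 10
            threading.Timer(RESTART_INTERVAL, server_restart).start()
            count = count + 1
            if count > 1:
                httpd.shutdown()       # old serve_forever() returns
                httpd.server_close()   # listening socket is released
            instance = Server()        # now the same address can be bound again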

    Read the article

  • Making a simple searchable directory of people and their skills in a day - Which technologies?

    - by gav
    Hi All, I am working with a small theatre company. Currently they have a list of people on paper with notes about their skills next to each one. I want to create a database / directory for them so that they can add, delete, update and search for people. It is a very simple and common scenario I know but the issue here is that I only have a day to build a working solution. The search has to be very simple. At first I was thinking LAMP but I'd rather not have to create it all from scratch and host it myself. That led me to Google Spreadsheets as a database; this has the advantage that they already use Google Docs for everything and if my front end goes tits up they can still get to the data. Presuming none of you can think of some existing software which does exactly what I want, the next step is to make a front end for the database. You can create forms for Google Spreadsheets but they only let you add new entries; I can make a Google Gadget but that will only let me implement the search as the Google Visualisation API provides read-only access. It's at this point I'm stuck, should I just create a Java Servlet front end for the Google Spreadsheet and use the Java API to add, search and update? I know this is a broad question but I'm just asking 'What would you do?' to implement this system with a day's development time? Gav

    Read the article

  • confused about how to use JSON in C#

    - by Josh
    The answer to just about every single question about using C# with json seems to be "use JSON.NET" but that's not the answer I'm looking for. the reason I say that is, from everything I've been able to read in the documentation, JSON.NET is basically just a better performing version of the DataContractSerializer built into the .net framework... Which means if I want to deserialize a JSON string, I have to define the full, strongly-typed class for EVERY request I might have. so if I have a need to get categories, posts, authors, tags, etc, I have to define a new class for every one of these things. This is fine if I built the client and know exactly what the fields are, but I'm using someone else's api, so I have no idea what the contract is unless I download a sample response string and create the class manually from the JSON string. Is that the only way it's done? Is there not a way to have it create a kind of hashtable that can be read with json["propertyname"]? Finally, if I do have to build the classes myself, what happens when the API changes and they don't tell me (as twitter seems to be notorious for doing)? I'm guessing my entire project will break until I go in and update the object properties... So what exactly is the general workflow when working with JSON? And by general I mean library-agnostic. I want to know how it's done in general, not specifically to a target library... I hope that made sense, this has been a very confusing area to get into... thanks!
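
    To make the "hashtable" idea concrete, this is roughly what I had in mind, assuming System.Web.Extensions (.NET 3.5+) is available; "title" and "author" are placeholder keys, not a real API:

        // using System.Collections.Generic;
        // using System.Web.Script.Serialization;   // System.Web.Extensions.dll
        var serializer = new JavaScriptSerializer();
        var json = serializer.Deserialize<Dictionary<string, object>>(jsonString);

        object title = json["title"];                               // simple values
        var author = (Dictionary<string, object>)json["author"];    // nested objects come back as dictionaries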

    Read the article

  • Alloy MVC Framework Titanium Network (Model)

    - by flyingDuck
    I'm trying to authenticate using the Model in Alloy. I have been trying to figure this problem out since yesterday. If anybody could help me, I'd really appreciate it. So, I have a view login.xml, then a controller login.js. The login.js contains the following function: var user = Alloy.Models.user; //my user.js model function login(e) { if($.username.value !== '' && $.password.value !== ''){ if(user.login($.username.value, $.password.value)){ Alloy.createController('home').getView().open(); $.login.close(); } }else{ alert('Username and/or Password required!'); } } Then in my user.js model, it's like this: extendModel : function(Model) { _.extend(Model.prototype, { login: function(username, password) { var first_name, last_name, email; var _this = this; var url = 'http://myurl.com/test.php'; var auth = Ti.Network.createHTTPClient({ onerror: function(e){ alert(e.error); }, onload: function(){ var json = this.responseText; var response = JSON.parse(json); if(response.logged == true){ first_name = response.f_name; last_name = response.l_name; email = response.email; _this.set({ loggedIn: 1, username: email, realname: first_name + ' ' + last_name, email: email, }); _this.save(); }else{ alert(response.message); } }, }); auth.open('POST', url); var params = { usernames: username, passwords: password, }; auth.send(params); alert(_this.get('email')); //alert email }, }); When I click on login in login.xml it calls the function login in index.js. So, now my problem is that, when I click the button for the first time, I get an empty alert from alert(_this.get('email')), but then when I click the button the second time, everything works fine, it alerts the email. I have no idea what's going on. Thank you for the help.
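
    My current suspicion is that auth.send() is asynchronous, so the alert (and any check in the controller) runs before onload has fired. A sketch of a callback-based controller, matching the names above ('Login failed' text is just a placeholder):

        // login.js
        function login(e) {
            if ($.username.value !== '' && $.password.value !== '') {
                user.login($.username.value, $.password.value, function(success) {
                    if (success) {
                        Alloy.createController('home').getView().open();
                        $.login.close();
                    } else {
                        alert('Login failed');
                    }
                });
            } else {
                alert('Username and/or Password required!');
            }
        }

        // user.js: login() gains a third parameter, callback, and calls
        // callback(true) at the end of onload (after _this.save()) and
        // callback(false) in onerror / the failed-login branch.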

    Read the article

  • Vim: yank and replace -- the same yanked input -- multiple times, and two other questions

    - by Hassan Syed
    Now that I am using vim for everything I type, rather than just for configuring servers, I want to sort out the following trivialities. I tried to formulate Google search queries but the results didn't address my questions :D. Question one: How do I yank and replace multiple times? Once I have something in the yank history (if that is what it's called) and then highlight and use the 'p' char in command mode the replaced text is put at the front of the yank history; therefore subsequent replace operations do not use the text I intended. I imagine this to be a useful feature under certain circumstances but I do not have a need for it in my workflow. Question two: How do I type text without causing the line to ripple forward? I use hard-tab stops to align my code in a certain way -- e.g., FunctionNameX ( lala * land ); FunctionNameProto ( ); When I figure out what needs to go into the second function, how do I insert it without moving the text up? Question three: Is there a way of having a uniform yank history across gvim instances on the same machine? I have 1 monitor. Just wondering, atm I am using highlight + mouse middle click.
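
    A sketch of what seem to be the stock answers to all three (plain Vim, no plugins; worth double-checking against :help registers, :help 'virtualedit' and :help 'clipboard'):

        " 1. The most recent yank is kept in register 0 even after p overwrites a
        "    visual selection, so the same text can be pasted repeatedly with:
        "0p
        " 2. To type over existing text without pushing it along, use Replace mode
        "    (press R), or allow the cursor past line ends for alignment edits:
        set virtualedit=all
        " 3. Routing yanks through the system clipboard gives gvim instances a
        "    shared register:
        set clipboard=unnamed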

    Read the article

  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple domain or wildcard support for convenience sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites, some are dynamic with usually wordpress installed. If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is: *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html" The basic nginx configuration I'm currently using is: ` listen 80 default; server _; ... location / { root /var/www/$host; if (-f $request_filename) { expires max; break; } # problem, what if index.php does not exist? if (!-e $request_filename) { rewrite ^/(.*)$ /index.php/$1 last; } } ... ` If an index.php does not exist, and the file also does not exist, I would like it to error 404. Currently, nginx does not support multiple condition if's or nested if so I need a workaround.
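
    A sketch of the try_files variant I'm planning to test (nginx 0.7.27+), assuming PHP is handled by a separate fastcgi location as in the rest of the setup:

        location / {
            root /var/www/$host;
            expires max;
            # serve the file or directory if it exists, otherwise hand off to index.php
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

        location ~ \.php$ {
            root /var/www/$host;
            # a static-only site with no index.php gets a plain 404 instead of a rewrite loop
            try_files $uri =404;
            # fastcgi_pass / fastcgi_param lines as before
        }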

    Read the article

  • Error when passing quotes to webservice by AJAX

    - by Radu
    I'm passing data using .ajax and here are my data and contentType attributes: data: '{ "UserInput" : "' + $('#txtInput').val() + '","Options" : { "Foo1":' + bar1 + ', "Foo2":' + Bar2 + ', "Flags":"' + flags + '", "Immunity":' + immunity + '}}', contentType: 'application/json; charset=utf-8', Server side my code looks like this: <WebMethod()> _ Public Shared Function ParseData(ByVal UserInput As String, ByVal Options As Options) As String The userinput is obvious but the Options structure is like the following: Public Structure Options Dim Foo1 As Boolean Dim Foo2 As Boolean Dim Flags As String Dim Immunity As Integer End Structure Everything works fine when $('#txtInput') contains no double-quotes but if they are present I get an error (for an input of asd"): {"Message":"Invalid object passed in, \u0027:\u0027 or \u0027}\u0027 expected. (22): { \"UserInput\" : \"asd\"\",\"Options\" : { \"Foo1\":false, \"Foo2\":false, \"Flags\":\"\", \"Immunity\":0}}","StackTrace":" at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeDictionary(Int32 depth)\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.BasicDeserialize(String input, Int32 depthLimit, JavaScriptSerializer serializer)\r\n at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize[T](String input)\r\n at System.Web.Script.Services.RestHandler.ExecuteWebServiceCall(HttpContext context, WebServiceMethodData methodData)","ExceptionType":"System.ArgumentException"} Any idea how I can avoid this error? Also, when I pass the same input with quotes directly it works fine.
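
    The workaround I'm testing now: build a plain object and let a JSON serializer do the quoting (native JSON.stringify, or Crockford's json2.js on older browsers), instead of concatenating strings by hand.

        data: JSON.stringify({
            UserInput: $('#txtInput').val(),
            Options: { Foo1: bar1, Foo2: Bar2, Flags: flags, Immunity: immunity }
        }),
        contentType: 'application/json; charset=utf-8',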

    Read the article

  • Help! How do I get the total number of rows from my SQL Server paging procedure?

    - by The_AlienCoder
    OK, I have a table in my SQL Server database that stores comments. My desire is to be able to page through the records using [Back],[Next], page numbers & [Last] buttons in my data list. I figured the most efficient way was to use a stored procedure that only returns a certain number of rows within a particular range. Here is what I came up with: @PageIndex INT, @PageSize INT, @postid int AS SET NOCOUNT ON begin WITH tmp AS ( SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row FROM comments WHERE (comments.postid = @postid)) SELECT tmp.* FROM tmp WHERE Row between (@PageIndex - 1) * @PageSize + 1 and @PageIndex*@PageSize end RETURN Now everything works fine and I have been able to implement [Next] and [Back] buttons in my data list pager. Now I need the total number of all comments (not in the current page) so that I can implement my page numbers and the [Last] button on my pager. In other words I want to return the total number of rows in my first select statement, i.e. WITH tmp AS ( SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row FROM comments WHERE (comments.postid = @postid)) set @TotalRows = @@rowcount @@rowcount doesn't work and raises an error. I also can't get count.* to work either. Is there another way to get the total number of rows, or is my approach doomed?
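
    One idea I want to try (a sketch against the same procedure; needs SQL Server 2005 or later): a windowed COUNT(*) OVER () rides along with ROW_NUMBER(), so the total comes back as an extra column on every row of the page without a second query.

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row,
                   COUNT(*)     OVER ()                        AS TotalRows
            FROM comments
            WHERE comments.postid = @postid
        )
        SELECT tmp.*
        FROM tmp
        WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize;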

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and dbml in visual studio 2010 using its wizards. Everything was working fine until i checked the tables data (also in visual studio server explorer) and none of my updates were there. using (var context = new CenasDataContext()) { context.Log = Console.Out; context.Cenas.InsertOnSubmit(new Cena() { id = 1}); context.SubmitChanges(); } This is the code i am using to update my database. At this point my database has one table with one field (PK) named ID. *INSERT INTO [dbo].Cenas VALUES (@p0) -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1] -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1* This is LOG from the execution (printed the context log into the console). The problem i'm having is that these updates are not persistent in the database. I mean that when i query my database (visual studio server explorer - new query) i see the table is empty, every time. I am using a SQL Server database file (.mdf).
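
    One thing I still need to rule out (a common gotcha rather than a confirmed diagnosis): when the connection string attaches a project .mdf, the build can copy that file into bin\Debug, so the inserts land in the copy the application opened while Server Explorer queries the original. Printing the connection string the context really uses should settle it:

        using (var context = new CenasDataContext())
        {
            // shows exactly which database file the DataContext is writing to
            Console.WriteLine(context.Connection.ConnectionString);
        }

    If that path differs from the one open in Server Explorer, either query that copy instead, or change the .mdf's "Copy to Output Directory" setting (or point the connection string at a fixed path) so there is only one file being written.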

    Read the article

  • Server returns 500 error only when called by Java client using urlConnection/httpUrlConnection

    - by user455889
    Hi - I'm having a very strange problem. I'm trying to call a servlet (JSP) with an HTTP GET and a few parameters (http://mydomain.com/method?param1=test&param2=123). If I call it from the browser or via WGET in a bash session, it works fine. However, when I make the exact same call in a Java client using urlConnection or httpURLConnection, the server returns a 500 error. I've tried everything I have found online including: urlConn.setRequestProperty("Accept-Language", "en-us,en;q=0.5"); Nothing I've tried, however, has worked. Unfortunately, I don't have access to the server I'm calling so I can't see the logs. Here's the latest code: private String testURLConnection() { String ret = ""; String url = "http://localhost:8080/TestService/test"; String query = "param1=value1&param2=value2"; try { URLConnection connection = new URL(url + "?" + query).openConnection(); connection.setRequestProperty("Accept-Charset", "UTF-8"); connection.setRequestProperty("Accept-Language", "en-us,en;q=0.5"); BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(connection.getInputStream())); String line; StringBuilder content = new StringBuilder(); while ((line = bufferedReader.readLine()) != null) { content.append(line + "\n"); } bufferedReader.close(); metaRet = content.toString(); log.debug(methodName + " return = " + metaRet); } catch (Exception ex) { log.error("Exception: " + ex); log.error("stack trace: " + getStackTrace(ex)); } return metaRet; } Any help would be greatly appreciated!
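
    Two things I'd still try from the Java side (a sketch; the User-Agent string is made up): send the headers a browser would send, and read the error stream on a non-2xx status, since the 500 body usually contains the server's actual complaint.

        // import java.io.*; import java.net.*;
        HttpURLConnection connection =
                (HttpURLConnection) new URL(url + "?" + query).openConnection();
        connection.setRequestProperty("User-Agent",
                "Mozilla/5.0 (compatible; MyJavaClient/1.0)");
        connection.setRequestProperty("Accept", "*/*");
        connection.setRequestProperty("Accept-Charset", "UTF-8");

        int status = connection.getResponseCode();
        InputStream body = (status >= 400) ? connection.getErrorStream()
                                           : connection.getInputStream();
        BufferedReader reader = new BufferedReader(new InputStreamReader(body, "UTF-8"));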

    Read the article

  • Create an instance of an exported C++ class from Delphi

    - by Alan G.
    I followed an excellent article by Rudy Velthuis about using C++ classes in DLL's. Everything was golden, except that I need access to some classes that do not have corresponding factories in the C++ DLL. How can I construct an instance of a class in the DLL? The classes in question are defined as class __declspec(dllexport) exampleClass { public: void foo(); }; Now without a factory, I have no clear way of instantiating the class, but I know it can be done, as I have seen SWIG scripts (.i files) that make these classes available to Python. If Python&SWIG can do it, then I presume/hope there is some way to make it happen in Delphi too. Now I don't know much about SWIG, but it seems like it generates some sort of map for C++ mangled names? Is that anywhere near right? Looking at the exports from the DLL, I suppose I could access functions & constructor/destructor by index or the mangled name directly, but that would be nasty; and would it even work? Even if I can call the constructor, how can I do the equivalent of "new CClass();" in Delphi?
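
    The route I'm leaning towards (and, as far as I can tell, roughly what SWIG generates under the hood): a small C++ shim DLL, built with the vendor's header and import library, that wraps the class in flat extern "C" functions Delphi can import by name. The header name below is assumed.

        // shim.cpp -- compiled against the original DLL's header and .lib
        #include "exampleClass.h"

        extern "C" __declspec(dllexport) exampleClass* ExampleClass_Create()
        {
            return new exampleClass();
        }

        extern "C" __declspec(dllexport) void ExampleClass_Foo(exampleClass* self)
        {
            self->foo();
        }

        extern "C" __declspec(dllexport) void ExampleClass_Destroy(exampleClass* self)
        {
            delete self;
        }

    On the Delphi side these three imports then play the role of the missing factory, with the returned pointer treated as an opaque handle.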

    Read the article

  • C++ VB6 interfacing problem

    - by Roshan
    Hi, I'm tearing my hair out trying to solve this one, any insights will be much appreciated: I have a C++ exe which acquires data from some hardware in the main thread and processes it in another thread (thread 2). I use a c++ dll to supply some data processing functions which are called from thread 2. I have a requirement to make another set of data processing functions in VB6. I have thus created a VB6 dll, using the add-in vbAdvance to create a standard dll. When I call functions from within this VB6 dll from the main thread, everything works exactly as expected. When I call functions from this VB6 dll in thread 2, I get an access violation. I've traced the error to the CopyMemory command, it would seem that if this is used within the call from the main thread, it's fine but in a call from the process thread, it causes an exception. Why should this be so? As far as I understand, threads share the same address space. Here is the code from my VB dll Public Sub UserFunInterface(ByVal in1ptr As Long, ByVal out1ptr As Long, ByRef nsamples As Long) Dim myarray1() As Single Dim myarray2() As Single Dim i As Integer ReDim myarray1(0 To nsamples - 1) As Single ReDim myarray2(0 To nsamples - 1) As Single With tsa1din(0) ' defined as safearray1d in a global definitions module .cDims = 1 .cbElements = 4 .cElements = nsamples .pvData = in1ptr End With With tsa1dout .cDims = 1 .cbElements = 4 .cElements = nsamples .pvData = out1ptr End With CopyMemory ByVal VarPtrArray(myarray1), VarPtr(tsa1din(0)), 4 CopyMemory ByVal VarPtrArray(myarray2), VarPtr(tsa1dout), 4 For i = 0 To nsamples - 1 myarray2(i) = myarray1(i) * 2 Next i ZeroMemory ByVal VarPtrArray(myarray1), 4 ZeroMemory ByVal VarPtrArray(myarray2), 4 End Sub

    Read the article

  • Events and references pattern

    - by serhio
    In a project I have the following relation between BO and GUI By e.g. G could represent a graphic with time lines, C a TimeLine curve, P - points of that curve and T the time that represents each point. Each GUI object is associated with the BO corresponding object. When T changes GUI P captures the Changed event and changes its location. So, when G should be modified, it modifies internally its objects and as result T changes, P moves and the GuiG visually changes, everything is OK. But there is an inconvenient of this architecture... BO should not be recreated, because this will breack the link between BO and GUIO. In particular, GUI P should always have the same reference of T. If in a business logic I do by e.g. P1.T = new T(this.T + 10) GUI_P1 will not move anymore, because it wait an event from the reference of former P1.T object, that does not belongs to P1 anymore. So the solution was to always modify the existing objects, not to recreate it. But here is an other inconvenient: performance. Say I have a ready newC object that should replace the older one. Instead of doing G1.C = newC I should do foreach T in foreach P in C replace with T from P from newC. Is there an other more optimal way to do it?

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    need help with logging all activities on a site as well as database changes. requirements: * should be in database * should be easily searchable by initiator (user name / session id), event (activity type) and event parameters i can think of a database design but either it involves a lot of tables (one per event) so i can log each of the parameters of an event in a separate field OR it involves one table with generic fields (7 int numeric and 7 text types) and log everything in one table with event type field determining what parameter got written where (and hoping that i don't need more than 7 fields of a certain type, or 8 or 9 or whatever number i choose)... example of entries (the usual things): [username] login failed @datetime [username] login successful @datetime [username] changed password @datetime, estimated security of password [low/ok/high/perfect] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] changed profile name from [old name] to [new name] @datetime [username] verified name with [credit card type] credit card @datetime datbase table [table name] purged of old entries @datetime via automated process etc... so anyone dealt with this before? any best practices / links you can share? i've seen it done with the generic solution mentioned above, but somehow that goes against what i learned from database design, but as you can see the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT i do LOVE the one event per table solution more than the generic one. any thoughts? edit: also, is there maybe an authoritative list of such (likely) events somewhere? thnx stack overflow says: the question you're asking appears subjective and is likely to be closed. my answer: probably is subjective, but it is directly related to my issue i have with designing a database / writing my code, so i'd welcome any help. also i tried narrowing down the ideas to 2 so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.
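
    The compromise I keep circling back to, sketched below in generic SQL (adjust the identity/sequence syntax for the actual engine): one narrow event table plus a key/value table for per-event parameters, so new event types need no schema change and everything stays searchable by user, session, event type and individual parameter values.

        CREATE TABLE event_log (
            event_id    BIGINT       NOT NULL PRIMARY KEY,  -- identity/sequence per engine
            occurred_at DATETIME     NOT NULL,
            username    VARCHAR(64)  NULL,
            session_id  VARCHAR(64)  NULL,
            event_type  VARCHAR(40)  NOT NULL               -- 'login_failed', 'password_changed', ...
        );

        CREATE TABLE event_log_param (
            event_id    BIGINT       NOT NULL REFERENCES event_log (event_id),
            param_name  VARCHAR(40)  NOT NULL,              -- 'search_string', 'result_id', ...
            param_value VARCHAR(255) NULL,
            PRIMARY KEY (event_id, param_name)
        );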

    Read the article

  • Any danger in calling flash messages html_safe?

    - by PreciousBodilyFluids
    I want a flash message that looks something like: "That confirmation link is invalid or expired. Click here to have a new one generated." Where "click here" is of course a link to another action in the app where a new confirmation link can be generated. Two drawbacks: One, since link_to isn't defined in the controller where the flash message is being set, I have to put the link html in myself. No big deal, but kind of messy. Number two: In order for the link to actually display properly on the page I have to html_safe the flash display function in the view, so now it looks like (using Haml): - flash.each do |name, message| = content_tag :div, message.html_safe This gives me pause. Everything else I html_safe has been HTML I've written myself in helpers and whatnot, but the contents of the flash hash are stored in a cookie client-side, and could conceivably be changed. I've thought through it, and I don't see how this could result in an XSS attack, but XSS isn't something I have a great understanding of anyway. So, two questions: 1. Is there any danger in always html_safe-ing all flash contents like this? 2. The fact that this solution is so messy (breaking MVC by using HTML in the controller, always html_safe-ing all flash contents) make me think I'm going about this wrong. Is there a more elegant, Rails-ish way to do this? I'm using Rails 3.0.0.beta3.
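
    The variant I'm experimenting with (a sketch; new_confirmation_path is a placeholder route): keep the flash as plain data plus a flag and let link_to build the markup in the view, so nothing that round-trips through the cookie store is ever marked html_safe.

        # controller
        flash[:alert] = "That confirmation link is invalid or expired."
        flash[:show_resend_link] = true

        -# flash partial (Haml)
        - flash.each do |name, message|
          = content_tag :div, message
        - if flash[:show_resend_link]
          = link_to "Click here to have a new one generated.", new_confirmation_path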

    Read the article

  • How to parse phpDoc style comment block with php?

    - by Reveller
    Please consider the following code with which I'm trying to parse only the first phpDoc style comment (noy using any other libraries) in a file (file contents put in $data variable for testing purposes): $data = " /** * @file A lot of info about this file * Could even continue on the next line * @author [email protected] * @version 2010-05-01 * @todo do stuff... */ /** * Comment bij functie bar() * @param Array met dingen */ function bar($baz) { echo $baz; } "; $data = trim(preg_replace('/\r?\n *\* */', ' ', $data)); preg_match_all('/@([a-z]+)\s+(.*?)\s*(?=$|@[a-z]+\s)/s', $data, $matches); $info = array_combine($matches[1], $matches[2]); print_r($info) This almose works, except for the fact that everything after @todo (including the bar() comment block and code) is considered the value of @todo: Array ( [file] => A lot of info about this file Could even continue on the next line [author] => [email protected] [version] => 2010-05-01 [todo] => do stuff... / /** Comment bij functie bar() [param] => Array met dingen / function bar() { echo ; } ) How does my code need to be altered so that only the first comment block is being parsed (in other words: parsing should stop after the first "*/" encountered?
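
    The fix I'm currently trying (a sketch on the same $data): isolate the first /** ... */ block with a non-greedy match before running the tag regex, so nothing after the closing */ can leak into @todo.

        if (preg_match('~/\*\*(.*?)\*/~s', $data, $block)) {
            $doc = trim(preg_replace('/\r?\n\s*\*\s*/', ' ', $block[1]));
            preg_match_all('/@([a-z]+)\s+(.*?)\s*(?=$|@[a-z]+\s)/s', $doc, $matches);
            $info = array_combine($matches[1], $matches[2]);
            print_r($info);
        }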

    Read the article

  • Hierarchy of meaning

    - by asldkncvas
    I am looking for a method to build a hierarchy of words. Background: I am a "amateur" natural language processing enthusiast and right now one of the problems that I am interested in is determining the hierarchy of word semantics from a group of words. For example, if I have the set which contains a "super" representation of others, i.e. [cat, dog, monkey, animal, bird, ... ] I am interested to use any technique which would allow me to extract the word 'animal' which has the most meaningful and accurate representation of the other words inside this set. Note: they are NOT the same in meaning. cat != dog != monkey != animal BUT cat is a subset of animal and dog is a subset of animal. I know by now a lot of you will be telling me to use wordnet. Well, I will try to but I am actually interested in doing a very domain specific area which WordNet doesn't apply because: 1) Most words are not found in Wordnet 2) All the words are in another language; translation is possible but is to limited effect. another example would be: [ noise reduction, focal length, flash, functionality, .. ] so functionality includes everything in this set. I have also tried crawling wikipedia pages and applying some techniques on td-idf etc but wikipedia pages doesn't really do much either. Can someone possibly enlighten me as to what direction my research should go towards? (I could use anything)

    Read the article

  • C# - retrieve file path from config file - @ doesn't do its magic

    - by Bart
    Hi guys, I'm currently working on a web service that retrieves an XML message, archives it and then processes it further. The archive folder is read from the Web.config. This is what the archive method looks like private void Archive(System.Xml.XmlDocument xmlDocument) { try { string directory = System.Configuration.ConfigurationManager.AppSettings.Get("ArchivePath"); ParseMessage(xmlDocument); directory = string.Format(@"{0}\{1}\{2}", @directory, _senderService, DateTime.Now.ToString("MMMyyyy")); System.IO.Directory.CreateDirectory(directory); string Id = _messageID; string senderService = _senderService; xmlDocument.Save(directory + @"\" + DateTime.Now.ToString("yyyyMMdd_") + Id + "_" + System.Guid.NewGuid().ToString().Substring(0, 13) + ".xml"); } The path structure I retrieve is C:\Program Files\Subfolder\Subfolder. In the development, QA, UAT and PRD environments everything works fine. But on another machine I now need to install the web service on (which I cannot debug, unfortunately), the directory string is 'C:Files'. Just to be sure I double checked the .NET version on the different machines (I thought perhaps the usage of @ before a string was version-dependent); all machines use 2.0.50727. Does anyone recognize this problem? Thanks in advance!

    Read the article
