Search Results

Search found 19603 results on 785 pages for 'variable length'.

Page 289/785 | < Previous Page | 285 286 287 288 289 290 291 292 293 294 295 296  | Next Page >

  • Converted some PHP functions to C# but getting different results

    - by Tom Beech
    With a bit of help from people on here, I've converted the following PHP functions to C#, but I get very different results between the two and can't work out where I've gone wrong.

    PHP:

        function randomKey($amount) {
            $keyset = "abcdefghijklmABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
            $randkey = "";
            for ($i=0; $i<$amount; $i++)
                $randkey .= substr($keyset, rand(0, strlen($keyset)-1), 1);
            return $randkey;
        }

        public static function hashPassword($password) {
            $salt = self::randomKey(self::SALTLEN);
            $site = new Sites();
            $s = $site->get();
            return self::hashSHA1($s->siteseed.$password.$salt.$s->siteseed).$salt;
        }

    C#:

        public static string randomKey(int amount) {
            string keyset = "abcdefghijklmABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
            string randkey = string.Empty;
            Random random = new Random();
            for (int i = 0; i < amount; i++) {
                randkey += keyset.Substring(0, random.Next(2, keyset.Length - 2));
            }
            return randkey;
        }

        static string hashPassword(string password) {
            string salt = randomKey(4);
            string siteSeed = "6facef08253c4e3a709e17d9ff4ba197";
            return CalculateSHA1(siteSeed + password + salt + siteSeed) + siteSeed;
        }

        static string CalculateSHA1(string ipString) {
            SHA1 sha1 = new SHA1CryptoServiceProvider();
            byte[] ipBytes = Encoding.Default.GetBytes(ipString.ToCharArray());
            byte[] opBytes = sha1.ComputeHash(ipBytes);
            StringBuilder stringBuilder = new StringBuilder(40);
            for (int i = 0; i < opBytes.Length; i++) {
                stringBuilder.Append(opBytes[i].ToString("x2"));
            }
            return stringBuilder.ToString();
        }

    EDIT: For the string 'password', the PHP function comes out as "d899d91adf31e0b37e7b99c5d2316ed3f6a999443OZl"; the C# version comes out as "905d25819d950cf73f629fc346c485c819a3094a6facef08253c4e3a709e17d9ff4ba197".
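    Two differences between the ports stand out: the C# randomKey appends a whole substring of random length starting at index 0 instead of a single character at a random index, and hashPassword appends the site seed where the PHP version appends the salt. A minimal corrected sketch (same keyset; outputs still won't match run-for-run because the salt is random):

        public static string RandomKey(int amount)
        {
            const string keyset = "abcdefghijklmABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
            Random random = new Random();              // ideally a single shared instance
            StringBuilder randkey = new StringBuilder(amount);
            for (int i = 0; i < amount; i++)
            {
                // one character at a random index, mirroring substr($keyset, rand(0, len-1), 1)
                randkey.Append(keyset[random.Next(keyset.Length)]);
            }
            return randkey.ToString();
        }

        static string HashPassword(string password)
        {
            string salt = RandomKey(4);                // PHP uses self::SALTLEN, which appears to be 4
            string siteSeed = "6facef08253c4e3a709e17d9ff4ba197";
            // the PHP version appends the salt, not the seed, after the hash
            return CalculateSHA1(siteSeed + password + salt + siteSeed) + salt;
        }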

    Read the article

  • In languages which create a new scope each time in a loop block, is a new local copy of the local loop variable created?

    - by Jian Lin
    It seems that in languages like C, Java, and Ruby (as opposed to JavaScript), a new scope is created for each iteration of a loop block, and the variable defined for the loop is actually made into a new local variable every single time and recorded in this new scope? For example, in Ruby:

        p RUBY_VERSION
        $foo = []
        (1..5).each do |i|
          $foo[i] = lambda { p i }
        end
        (1..5).each do |j|
          $foo[j].call()
        end

    the printout is:

        [MacBook01:~] $ ruby scope.rb
        "1.8.6"
        1
        2
        3
        4
        5
        [MacBook01:~] $

    So it looks like when a new scope is created, a new local copy of i is also created and recorded in this new scope, so that when the function is executed at a later time, the i is found in those scope chains as 1, 2, 3, 4, 5 respectively. Is this true? (It sounds like a heavy operation.) Contrast that with:

        p RUBY_VERSION
        $foo = []
        i = 0
        (1..5).each do |i|
          $foo[i] = lambda { p i }
        end
        (1..5).each do |j|
          $foo[j].call()
        end

    This time, the i is defined before entering the loop, so Ruby 1.8.6 will not put this i in the new scope created for the loop block, and therefore when i is looked up in the scope chain, it always refers to the i in the outside scope and gives 5 every time:

        [MacBook01:~] $ ruby scope2.rb
        "1.8.6"
        5
        5
        5
        5
        5
        [MacBook01:~] $

    I heard that in Ruby 1.9, i will be treated as a local defined for the loop even when there is an i defined earlier? The operation of creating a new scope and a new local copy of i each time through the loop seems heavy, since it wouldn't have mattered if we were not invoking the functions at a later time. So when the functions don't need to be invoked at a later time, could the interpreter and the compiler for C / Java optimize it so that there is no local copy of i each time?
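    For comparison, C# (the language most entries on this page use) exposes the same distinction between one shared loop variable and a per-iteration copy; a small self-contained sketch, not taken from the question:

        using System;
        using System.Collections.Generic;

        class CaptureDemo
        {
            static void Main()
            {
                // A C# `for` loop has one shared loop variable, so every lambda
                // captures the same storage location and prints 5 five times.
                var shared = new List<Action>();
                for (int i = 0; i < 5; i++)
                    shared.Add(() => Console.WriteLine(i));
                shared.ForEach(a => a());        // 5 5 5 5 5

                // Copying into a block-local variable gives each closure its own
                // per-iteration capture, like the Ruby 1.9 block parameter.
                var perIteration = new List<Action>();
                for (int i = 0; i < 5; i++)
                {
                    int copy = i;
                    perIteration.Add(() => Console.WriteLine(copy));
                }
                perIteration.ForEach(a => a());  // 0 1 2 3 4
            }
        }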

    Read the article

  • How to send a JSONObject to a REST service?

    - by Sebi
    Retrieving data from the REST Server works well, but if I want to post an object it doesn't work: public static void postJSONObject(int store_type, FavoriteItem favorite, String token, String objectName) { String url = ""; switch(store_type) { case STORE_PROJECT: url = URL_STORE_PROJECT_PART1 + token + URL_STORE_PROJECT_PART2; //data = favorite.getAsJSONObject(); break; } HttpClient httpClient = new DefaultHttpClient(); HttpPost postMethod = new HttpPost(url); try { HttpEntity entity = new StringEntity("{\"ID\":0,\"Name\":\"Mein Projekt10\"}"); postMethod.setEntity(entity); HttpResponse response = httpClient.execute(postMethod); Log.i("JSONStore", "Post request, to URL: " + url); System.out.println("Status code: " + response.getStatusLine().getStatusCode()); } catch (ClientProtocolException e) { I always get a 400 Error Code. Does anybody know whats wrong? I have working C# code, but I can't convert: System.Net.WebRequest wr = System.Net.HttpWebRequest.Create("http://localhost:51273/WSUser.svc/pak3omxtEuLrzHSUSbQP/project"); wr.Method = "POST"; string data = "{\"ID\":1,\"Name\":\"Mein Projekt\"}"; byte [] d = UTF8Encoding.UTF8.GetBytes(data); wr.ContentLength = d.Length; wr.ContentType = "application/json"; wr.GetRequestStream().Write(d, 0, d.Length); System.Net.WebResponse wresp = wr.GetResponse(); System.IO.StreamReader sr = new System.IO.StreamReader(wresp.GetResponseStream()); string line = sr.ReadToEnd();

    Read the article

  • jQuery autocomplete web service - what am I doing wrong?

    - by dzajdol
    I created a class for JSON responses: public class PostCodeJson { public String Text { get; private set; } public String Value { get; private set; } #region Constructors /// <summary> /// Empty constructor /// </summary> public PostCodeJson() { this.Text = String.Empty; this.Value = String.Empty; } /// <summary> /// Constructor /// </summary> /// <param name="_text"></param> /// <param name="_value"></param> public PostCodeJson(String _text, String _value) { this.Text = _text; this.Value = _value; } #endregion Constructors } and function returns list of this class using in webservices method: [WebMethod] public List<PostCodeJson> GetPostCodesCompletionListJson(String prefixText, Int32 count) { return LibDataAccess.DBServices.PostCodes.GetPostCodeJson(prefixText, count); } And in aspx i do this that: <script> $(document).ready(function() { $("#<%=pc.ClientID %>").autocomplete( baseUrl + "WebServices/Autocomplete.asmx/GetPostCodesCompletionListJson", { parse: function(data) { var array = new Array(); for (var i = 0; i < data.length; i++) { var datum = data[i]; var name = datum.Text; var display = name; array[array.length] = { data: datum, value: display, result: datum.Value }; } return array; }, dataType: "xml" }); }); </script> and when you enter something in the box i got an error: Request format is unrecognized for URL unexpectedly ending in '/GetPostCodesCompletionListJson What am I doing wrong??
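    That error typically means the .asmx endpoint is refusing a plain HTTP GET/POST call. Two commonly cited remedies are enabling the HttpGet/HttpPost protocols under <webServices> in web.config, or marking the service as a script service so ASP.NET AJAX will serve it to client script. A hedged C# sketch of the latter; the class name is assumed from the Autocomplete.asmx URL in the question:

        using System.Collections.Generic;
        using System.Web.Services;
        using System.Web.Script.Services;

        [WebService(Namespace = "http://tempuri.org/")]
        [ScriptService]   // allows the methods to be called from client-side script
        public class Autocomplete : WebService
        {
            [WebMethod]
            [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
            public List<PostCodeJson> GetPostCodesCompletionListJson(string prefixText, int count)
            {
                return LibDataAccess.DBServices.PostCodes.GetPostCodeJson(prefixText, count);
            }
        }

    Note that the jQuery call in the question requests dataType: "xml" while the method name promises JSON; whichever route is taken, the response format and the client-side dataType have to agree.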

    Read the article

  • Handling Errors in PHP

    - by Mike
    I have a custom class that, when called, will redirect to a page and send a 'message_type' and 'message' variable via GET. When the page opens it checks for these variables and displays a 'success', 'warning', or 'error' message depending on the 'message_type' variable. I made it so the user thinks they stay on the same page. It also allows for other variables to be passed along with the message. Is this good practice, or should I just start using exceptions? Example: //Call a static function that will redirect to a page, with an error message RedirectWithMessage::go('somepage.php', MessageType::ERROR, 'Error message here.'); The following checkMessage() function is an include file: function checkMessage() { if((isset($_GET['message_type']) && strlen($_GET['message_type'])) && (isset($_GET['message']) && strlen($_GET['message_type']))) { DisplayMessage::display($_GET['message_type'], $_GET['message']); return true; } return false; } On the page that is redirected to, call checkMessage(); //If a message is received, display it. If not, do nothing checkMessage(); I know this might be vague, and I can supply more code if necessary. I guess the issue is that I don't have much experience using exceptions, but I think they seem cumbersome (writing try-catch blocks everywhere). Please let me know if I am making this more difficult for myself or if there is a better solution. Thanks! Mike

    Read the article

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I have trouble creating a Bitmap from a byte array. I post this after a careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code is meant to execute a filter on an 8bppIndexed digital image, writing the pixel values to a byte[] buffer that is then converted again (after some processing to manage gray levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know, I should use lock and unlock, but I prefer one headache at a time. Anyway, this code should be a sort of identity transform, and at last I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;
            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e It is quite clear that I am losing a pixel per row, but I cannot understand why: I have carefully controlled all the parameters (OutputImageCols, OutputImageRows and the byte[] byteBuffer length and content), even writing known values as a way to test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help me identify where the problem is? Thanks a lot
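    The usual culprit when a single Marshal.Copy into LockBits data shears the image is the stride: GDI+ pads every scan line of an 8bpp bitmap to a multiple of 4 bytes, so a 254-pixel-wide image has a 256-byte stride, and a tightly packed 254-bytes-per-row buffer drifts a little further off with each row. A hedged rewrite of FillOutputImage that copies row by row (it assumes byteBuffer really is packed at OutputImageCols bytes per row):

        private void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);
            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;

            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            BitmapData data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            try
            {
                // Copy row by row: each destination row starts at Scan0 + y * Stride,
                // and Stride (256 here) is the padded width, not OutputImageCols (254).
                for (int y = 0; y < OutputImageRows; y++)
                {
                    IntPtr destRow = new IntPtr(data.Scan0.ToInt64() + (long)y * data.Stride);
                    Marshal.Copy(byteBuffer, y * OutputImageCols, destRow, OutputImageCols);
                }
            }
            finally
            {
                OutputImage.UnlockBits(data);
            }
        }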

    Read the article

  • How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

        class Array
          def shuffle
            ret = dup
            j = length
            i = 0
            while j > 1
              r = i + rand(j)
              ret[i], ret[r] = ret[r], ret[i]
              i += 1
              j -= 1
            end
            ret
          end
        end

        (0..9).to_a.shuffle.each{|x| f(x)}

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle is used to efficiently provide random ordering. My problem is that shuffle needs to operate on an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. Imagine replacing (0..9) with (0..99**99). This is also why the following code will not work:

        tried = {} # store previous attempts
        bigint = 99**99
        bigint.times {
          x = rand(bigint)
          redo if tried[x]
          tried[x] = true
          f(x) # some function
        }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do? [Edit1]: Why do I want to do this? I'm trying to exhaust the search space of a hash algorithm for an N-length input string looking for partial collisions. Each number I generate is equivalent to a unique input string, entropy and all. Basically, I'm "counting" using a custom alphabet. [Edit2]: This means that f(x) in the above examples is a method that generates a hash and compares it to a constant, target hash for partial collisions. I do not need to store the value of x after I call f(x), so memory should remain constant over time. [Edit3/4/5/6]: Further clarification/fixes.
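    One memory-constant way to visit every value of a range exactly once in a scrambled order is to walk an affine permutation of the indices: pick a step coprime to the range size and repeatedly move x -> (x + step) mod n from a random start. The ordering is far weaker than a real shuffle, but no tried table is needed. A sketch in C# (most entries on this page are C#) rather than Ruby, using BigInteger for astronomically large n:

        using System;
        using System.Numerics;

        static class PermutationWalk
        {
            // Visits every value in [0, n) exactly once using O(1) memory.
            // step must be coprime to n; for n = 99^99 (prime factors 3 and 11),
            // any step not divisible by 3 or 11 qualifies.
            public static void VisitAll(BigInteger n, BigInteger start, BigInteger step, Action<BigInteger> f)
            {
                BigInteger x = ((start % n) + n) % n;
                for (BigInteger i = 0; i < n; i++)
                {
                    f(x);                  // e.g. hash the input string this number encodes
                    x = (x + step) % n;
                }
            }
        }

    A call such as PermutationWalk.VisitAll(BigInteger.Pow(99, 99), someStart, someStep, x => Check(x)) then sweeps the whole space with constant memory; someStart, someStep and Check are placeholders to be chosen per run.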

    Read the article

  • rpy2: Converting a data.frame to a numpy array

    - by Mike Dewar
    I have a data.frame in R. It contains a lot of data : gene expression levels from many (125) arrays. I'd like the data in Python, due mostly to my incompetence in R and the fact that this was supposed to be a 30 minute job. I would like the following code to work. To understand this code, know that the variable path contains the full path to my data set which, when loaded, gives me a variable called immgen. Know that immgen is an object (a Bioconductor ExpressionSet object) and that exprs(immgen) returns a data frame with 125 columns (experiments) and tens of thousands of rows (named genes). robjects.r("load('%s')"%path) # loads immgen e = robjects.r['data.frame']("exprs(immgen)") expression_data = np.array(e) This code runs, but expression_data is simply array([[1]]). I'm pretty sure that e doesn't represent the data frame generated by exprs() due to things like: In [40]: e._get_ncol() Out[40]: 1 In [41]: e._get_nrow() Out[41]: 1 But then again who knows? Even if e did represent my data.frame, that it doesn't convert straight to an array would be fair enough - a data frame has more in it than an array (rownames and colnames) and so maybe life shouldn't be this easy. However I still can't work out how to perform the conversion. The documentation is a bit too terse for me, though my limited understanding of the headings in the docs implies that this should be possible. Anyone any thoughts?

    Read the article

  • How to download a .txt file from a URL?

    - by Colin Roe
    I produce a text file and it is saved to a location in the project folder. How do I redirect the user to the URL that contains that text file, so they can download it? CreateCSVFile writes the CSV file to a file path based on a DataTable. Calling:

        string pth = ("C:\\Work\\PG\\AI Handheld Website\\AI Handheld Website\\Reports\\Files\\report.txt");
        CreateCSVFile(data, pth);

    And the function:

        public void CreateCSVFile(DataTable dt, string strFilePath)
        {
            StreamWriter sw = new StreamWriter(strFilePath, false);
            int iColCount = dt.Columns.Count;
            for (int i = 0; i < iColCount; i++)
            {
                sw.Write(dt.Columns[i]);
                if (i < iColCount - 1)
                {
                    sw.Write(",");
                }
            }
            sw.Write(sw.NewLine);
            // Now write all the rows.
            foreach (DataRow dr in dt.Rows)
            {
                for (int i = 0; i < iColCount; i++)
                {
                    if (!Convert.IsDBNull(dr[i]))
                    {
                        sw.Write(dr[i].ToString());
                    }
                    if (i < iColCount - 1)
                    {
                        sw.Write(",");
                    }
                }
                sw.Write(sw.NewLine);
            }
            sw.Close();
            Response.WriteFile(strFilePath);
            FileInfo fileInfo = new FileInfo(strFilePath);
            if (fileInfo.Exists)
            {
                //Response.Clear();
                //Response.AddHeader("Content-Disposition", "attachment; filename=" + fileInfo.Name);
                //Response.AddHeader("Content-Length", fileInfo.Length.ToString());
                //Response.ContentType = "application/octet-stream";
                //Response.Flush();
                //Response.TransmitFile(fileInfo.FullName);
            }
        }
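    The commented-out block is close to the usual pattern for pushing a file to the browser from WebForms; a hedged sketch of how the end of the method could look, keeping the question's strFilePath and assuming the goal is a download prompt rather than a redirect to the file's URL:

        // After the StreamWriter has been closed:
        FileInfo fileInfo = new FileInfo(strFilePath);
        if (fileInfo.Exists)
        {
            Response.Clear();
            Response.ContentType = "application/octet-stream";   // prompts a Save As dialog
            Response.AddHeader("Content-Disposition", "attachment; filename=" + fileInfo.Name);
            Response.AddHeader("Content-Length", fileInfo.Length.ToString());
            Response.TransmitFile(fileInfo.FullName);
            Response.End();   // keep the rest of the page out of the response
        }

    Response.End aborts the request thread by design; calling HttpContext.Current.ApplicationInstance.CompleteRequest() instead is the gentler way to stop further output.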

    Read the article

  • Iterating through JSON with jQuery/JavaScript

    - by Aaron Salazar
    I am trying to fill a table with JSON data. When I run the following script I get only the last entry of 10. I must have to do some sort of .append() or something. I've tried to put this in but it just returns nothing. $(function() { $('#ReportedIssue').change(function() { $.getJSON('/CurReport/GetUpdatedTableResults', function(json) { //alert(json.GetDocumentResults.length); for (var i = 0; i < json.GetDocumentResults.length; i++) { $('#DocumentInfoTable').html( "<tr>" + "<td>" + json.GetDocumentResults[i].Document.DocumentId + "</td>" + "<td>" + json.GetDocumentResults[i].Document.LanguageCode + "</td>" + "<td>" + json.GetDocumentResults[i].ReportedIssue + "</td>" + "<td>" + json.GetDocumentResults[i].PageNumber + "</td>" + "</tr>" ) }; }); }); }); Thank you, Aaron

    Read the article

  • Problem with writing a hexadecimal string

    - by quilby
    Here is my code /* gcc -c -Wall -g main.c gcc -g -lm -o main main.o */ #include <stdlib.h> #include <stdio.h> #include <string.h> void stringToHex(const char* string, char* hex) { int i = 0; for(i = 0; i < strlen(string)/2; i++) { printf("s%x", string[2*i]); //for debugging sprintf(&hex[i], "%x", string[2*i]); printf("h%x\n", hex[i]); //for debugging } } void writeHex(char* hex, int length, FILE* file, long position) { fseek(file, position, SEEK_SET); fwrite(hex, sizeof(char), length, file); } int main(int argc, char** argv) { FILE* pic = fopen("hi.bmp", "w+b"); const char* string = "f2"; char hex[strlen(string)/2]; stringToHex(string, hex); writeHex(hex, strlen(string)/2, pic, 0); fclose(pic); return 0; } I want it to save the hexadecimal number 0xf2 to a file (later I will have to write bigger/longer numbers though). The program prints out - s66h36 And when I use hexedit to view the file I see the number '36' in it. Why is my code not working? Thanks!

    Read the article

  • FIFOs implementation

    - by nunos
    Consider the following code:

    writer.c

        mkfifo("/tmp/myfifo", 0660);
        int fd = open("/tmp/myfifo", O_WRONLY);
        char *foo, *bar;
        ...
        write(fd, foo, strlen(foo)*sizeof(char));
        write(fd, bar, strlen(bar)*sizeof(char));

    reader.c

        int fd = open("/tmp/myfifo", O_RDONLY);
        char buf[100];
        read(fd, buf, ??);

    My question is: since it is not known beforehand how many bytes foo and bar will have, how can I know how many bytes to read from reader.c? If I, for example, read 10 bytes in the reader and foo and bar are together less than 10 bytes, I will have them both in the same variable, and that I do not want. Ideally I would have one read call for every variable, but again I don't know beforehand how many bytes the data will have. I thought about adding another write instruction in writer.c, between the writes for foo and bar, with a separator, and then I would have no problem decoding it from reader.c. Is this the way to go about it? Thanks.
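    A separator works as long as it can never occur inside the data; the other standard answer is to frame each message with a fixed-size length prefix, so the reader knows exactly how many bytes belong to foo and how many to bar. The same idea sketched in C# over a generic Stream (most entries on this page are C#); the writer.c/reader.c pair would do the equivalent with a 4-byte header before each write():

        using System;
        using System.IO;
        using System.Text;

        static class Framing
        {
            // Writes a 4-byte length prefix (BitConverter byte order) followed by the payload.
            public static void WriteMessage(Stream s, string text)
            {
                byte[] payload = Encoding.UTF8.GetBytes(text);
                byte[] header = BitConverter.GetBytes(payload.Length);
                s.Write(header, 0, header.Length);
                s.Write(payload, 0, payload.Length);
            }

            // Reads one length-prefixed message; returns null at end of stream.
            public static string ReadMessage(Stream s)
            {
                byte[] header = ReadExactly(s, 4);
                if (header == null) return null;
                int length = BitConverter.ToInt32(header, 0);
                byte[] payload = ReadExactly(s, length);
                return payload == null ? null : Encoding.UTF8.GetString(payload);
            }

            static byte[] ReadExactly(Stream s, int count)
            {
                byte[] buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int n = s.Read(buffer, offset, count - offset);
                    if (n == 0) return null;   // stream closed mid-message
                    offset += n;
                }
                return buffer;
            }
        }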

    Read the article

  • DataTable won't DataBind with a DataTable.NewRow()

    - by David
    Is DataTable.NewRow() insufficient as the only row in a DataTable? I would expect this to work, but it doesn't. It's near the end of my Page_Load, inside my if (!Postback) block. gridCPCP is a GridView.

        DataTable dt = new DataTable();
        dt.Columns.Add("ID", int.MinValue.GetType());
        dt.Columns.Add("Code", string.Empty.GetType());
        dt.Columns.Add("Date", DateTime.MinValue.GetType());
        dt.Columns.Add("Date2", DateTime.MinValue.GetType());
        dt.Columns.Add("Filename", string.Empty.GetType());

        //code to add rows

        if (dt.Rows.Count > 0)
        {
            gridCPCP.DataSource = dt;
            gridCPCP.DataBind();
        }
        else
        {
            dt.Rows.Add(dt.NewRow());
            gridCPCP.DataSource = dt;
            gridCPCP.DataBind();    //EXCEPTION

            int TotalColumns = gridCPCP.Rows[0].Cells.Count;
            gridCPCP.Rows[0].Cells.Clear();
            gridCPCP.Rows[0].Cells.Add(new TableCell());
            gridCPCP.Rows[0].Cells[0].ColumnSpan = TotalColumns;
            gridCPCP.Rows[0].Cells[0].Text = "No Record Found";
        }

    The exception is thrown on gridCPCP.DataBind(), and only when execution reaches the else block. If rows were added above via dt.Rows.Add(new object[] { ... }), binding works.

        System.ArgumentOutOfRangeException: Length cannot be less than zero.
        Parameter name: length
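    One way to avoid binding a dummy row of DBNull values at all is to let the GridView render its own placeholder when the data source is empty; a short sketch of that route (the text is illustrative):

        // Bind the table whether or not it has rows and let the grid handle the empty case.
        gridCPCP.EmptyDataText = "No Record Found";   // or define an <EmptyDataTemplate> in markup
        gridCPCP.DataSource = dt;
        gridCPCP.DataBind();

    If the dummy-row approach is kept, it is worth checking any DataFormatString or Substring logic tied to the grid's columns, since every cell of the NewRow() row is DBNull, and empty cell text is what such code typically chokes on.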

    Read the article

  • How to program a critical section for reader-writer systems?

    - by Srinivas Nayak
    Hi, let's say I have a reader-writer system where the reader and writer run concurrently. 'a' and 'b' are two shared variables which are related to each other, so modifying them needs to be an atomic operation. A reader-writer system can be of the following types:

        rr    ww    r-w    r-ww    rr-w    rr-ww

    where r is a single reader, rr multiple readers, w a single writer, and ww multiple writers. Now, we can have a read method for a reader and a write method for a writer as follows; I have written them per system type.

    rr:

        read_method { read a; read b; }

    ww:

        write_method { lock(m); write a; write b; unlock(m); }

    r-w, r-ww, rr-w, rr-ww:

        read_method { lock(m); read a; read b; unlock(m); }
        write_method { lock(m); write a; write b; unlock(m); }

    For a multiple-reader system, shared variable access doesn't need to be atomic. For a multiple-writer system, shared variable access needs to be atomic, so it is locked with 'm'. But for system types 3 to 6, are my read_method and write_method correct? How can I improve them? Sincerely, Srinivas Nayak
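    For the mixed cases (types 3 to 6), the usual improvement over a single mutex is a reader-writer lock: concurrent readers no longer serialize each other, while writers still get exclusive access to the a/b pair. A sketch in C# using ReaderWriterLockSlim, since most entries on this page are C#; the variable names follow the question and the shape is illustrative, not prescriptive:

        using System.Threading;

        class SharedPair
        {
            private readonly ReaderWriterLockSlim rw = new ReaderWriterLockSlim();
            private int a, b;

            // Any number of readers may hold the read lock at once and still see a and b
            // as a consistent pair, because writers are excluded while they read.
            public void Read(out int readA, out int readB)
            {
                rw.EnterReadLock();
                try { readA = a; readB = b; }
                finally { rw.ExitReadLock(); }
            }

            // Writers are exclusive against both other writers and all readers.
            public void Write(int newA, int newB)
            {
                rw.EnterWriteLock();
                try { a = newA; b = newB; }
                finally { rw.ExitWriteLock(); }
            }
        }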

    Read the article

  • Java Scanner won't follow file

    - by Steve Renyolds
    Trying to tail / parse some log files. Entries start with a date then can span many lines. This works, but does not ever see new entries to file. File inputFile = new File("C:/test.txt"); InputStream is = new FileInputStream(inputFile); InputStream bis = new BufferedInputStream(is); //bis.skip(inputFile.length()); Scanner src = new Scanner(bis); src.useDelimiter("\n2010-05-01 "); while (true) { while(src.hasNext()){ System.out.println("[ " + src.next() + " ]"); } } Doesn't seem like Scanner's next() or hasNext() detects new entries to file. Any idea how else I can implement, basically, a tail -f with custom delimiter. ok - using Kelly's advise i'm checking & refreshing the scanner, this works. Thank you !! if anyone has improvement suggestions plz do! File inputFile = new File("C:/test.txt"); InputStream is = new FileInputStream(inputFile); InputStream bis = new BufferedInputStream(is); //bis.skip(inputFile.length()); Scanner src = new Scanner(bis); src.useDelimiter("\n2010-05-01 "); while (true) { while(src.hasNext()){ System.out.println("[ " + src.next() + " ]"); } Thread.sleep(50); if(bis.available() > 0){ src = new Scanner(bis); src.useDelimiter("\n2010-05-01 "); } }

    Read the article

  • How to log in to craigslist through C#

    - by kosikiza
    I am using the following code to log in to craigslist, but I haven't succeeded yet.

        string formParams = string.Format("inputEmailHandle={0}&inputPassword={1}", "[email protected]", "pakistan01");
        //string postData = "[email protected]&inputPassword=pakistan01";
        string uri = "https://accounts.craigslist.org/";
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
        request.KeepAlive = true;
        request.ProtocolVersion = HttpVersion.Version10;
        request.Method = "POST";
        byte[] postBytes = Encoding.ASCII.GetBytes(formParams);
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = postBytes.Length;
        Stream requestStream = request.GetRequestStream();
        requestStream.Write(postBytes, 0, postBytes.Length);
        requestStream.Close();
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        cookyHeader = response.Headers["Set-cookie"];

        string pageSource;
        string getUrl = "https://post.craigslist.org/del";
        WebRequest getRequest = WebRequest.Create(getUrl);
        getRequest.Headers.Add("Cookie", cookyHeader);
        WebResponse getResponse = getRequest.GetResponse();
        using (StreamReader sr = new StreamReader(getResponse.GetResponseStream()))
        {
            pageSource = sr.ReadToEnd();
        }
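    Copying the raw Set-Cookie header across requests is fragile (multiple cookies, paths and expiry attributes end up concatenated into one string). A common alternative is to share a CookieContainer between the login POST and the later GET so the session cookies travel automatically; a hedged sketch reusing the question's formParams and URLs, which may not be exactly what the live login form expects:

        // One container shared by both requests carries the session cookies automatically.
        CookieContainer cookies = new CookieContainer();

        HttpWebRequest login = (HttpWebRequest)WebRequest.Create("https://accounts.craigslist.org/");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;
        byte[] body = Encoding.ASCII.GetBytes(formParams);
        using (Stream s = login.GetRequestStream())
        {
            s.Write(body, 0, body.Length);
        }
        using (HttpWebResponse loginResponse = (HttpWebResponse)login.GetResponse())
        {
            // Cookies from the login response (and any redirects) land in the container.
        }

        HttpWebRequest page = (HttpWebRequest)WebRequest.Create("https://post.craigslist.org/del");
        page.CookieContainer = cookies;   // cookies from the login are sent back automatically
        string pageSource;
        using (StreamReader sr = new StreamReader(page.GetResponse().GetResponseStream()))
        {
            pageSource = sr.ReadToEnd();
        }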

    Read the article

  • Execute the less compiler anywhere on the computer

    - by Xenioz
    I'm having a little problem with executing *.cmd files so I can execute them anywhere on the computer with cmd. What I exactly want is to execute the less.cmd file, which support optional arguments and uses lessc.wsf (Less.js compiler for Windows Script Host) for converting less css to pure css. The less.cmd contains: ::For convenience @cscript //nologo "%~dp0lessc.wsf" %* What I've done so far: added absolute path to lessc.cmd to the PATH system variable and moved .cmd in the PATHTEXT system variable to the beginning. Also did this: From a command prompt; assoc .bat should return with ..bat=batfile If not assoc .bat=batfile to restore the default file type association. ftype batfile should return with batfile="%1" %* If not ftype batfile="%1" %* to restore the default "Open" action for the file type. This still doesn't work unless I approach the cmd file with a absolute path in cmd, if I enter lessc anywhere else then I get C:\Intel Intel is not recognized as an internal or external command, operable program or batch file. , I've restarted my computer more than once to be sure changes will take effect. I hope somebody has the answer.

    Read the article

  • Can't empty JS array....

    - by solefald
    Yes, I am having issues with this very basic (or so it seems) thing. I am pretty new to JS and still trying to get my head around it, but I am pretty familiar with PHP and have never experienced anything like this. I just can not empty this damn array, and stuff keeps getting added to the end every time i run this. I have no idea why, and i am starting to think that it is somehow related to the way chekbox id's are named, but i may be mistaking.... id="alias[1321-213]", id="alias[1128-397]", id="alias[77-5467]" and so on. I have tried sticking checkboxes = []; and checkboxes.length = 0; In every place possible. Right after the beginning of the function, at the end of the function, even outside, right before the function, but it does not help, and the only way to empty this array is to reload the page. Please tell me what I am doing wrong, or at least point me to a place where i can RTFM. I am completely out of ideas here. function() { var checkboxes = new Array(); checkboxes = $(':input[name="checkbox"]'); $.each(checkboxes, function(key, value) { console.log(value.id); alert(value.id); } ); checkboxes.length = 0; } I have also read Mastering Javascript Arrays 3 times to make sure I am not doing something wrong, but still can't figure it out....

    Read the article

  • Merging datasets with 2 different time variables in SAS

    - by John
    Hey guys, for those regularly browsing this site, sorry for yet another question (however, I did solve my last question myself!). I have another problem with merging datasets; it seems that accounting for time in datasets is a real pain in the ass. I successfully managed to merge on months in my previous datasets, however my final dataset only has quarter as a time count variable. So where all my normal databases have month 1-xxx as an indicator of time, this database has quarter as an indicator of time. I still want to add the variables of this last database, let's call it TVOL, into my WORK database.

    Quick summary: Quarter 0 = JAN1996-MAR1996; Month 0 = JAN1996.

    Example: TVOL

        TVOL     Ticker    Quarter
        1500     AA        -1
        52546    BB        15

    Example: WORK

        BETA     Ticker    Month
        1.52     AA        2
        1.54     BB        3

    Example: Merged

        BETA     TVOL    Ticker    Month
        1.52     500     AA        2

    I now want to merge these two tables using the following relationship: if the month is in quarter 1, the data of quarter 0 has to be used, so if I have an observation in WORK with date 2FEB1996, the TVOL of quarter -1 has to be put behind this observation. Something like: if the month is in quarter i, use the data of quarter i-1. Also, as TVOL is measured quarterly and I have to report it monthly, I have to take the average, so (TVOL/3) should be added as a variable. Thanks!

    Read the article

  • Creating a custom Configuration

    - by Rob
    I created a customer configuration class Reports. I then created another class called "ReportsCollection". When I try and do the "ConfigurationManager.GetSection()", it doesn't populate my collection variable. Can anyone see any mistakes in my code? Here is the collection class: public class ReportsCollection : ConfigurationElementCollection { public ReportsCollection() { } protected override ConfigurationElement CreateNewElement() { throw new NotImplementedException(); } protected override ConfigurationElement CreateNewElement(string elementName) { return base.CreateNewElement(elementName); } protected override object GetElementKey(ConfigurationElement element) { throw new NotImplementedException(); } public Report this[int index] { get { return (Report)BaseGet(index); } } } Here is the reports class: public class Report : ConfigurationSection { [ConfigurationProperty("reportName", IsRequired = true)] public string ReportName { get { return (string)this["reportName"]; } //set { this["reportName"] = value; } } [ConfigurationProperty("storedProcedures", IsRequired = true)] public StoredProceduresCollection StoredProcedures { get { return (StoredProceduresCollection)this["storedProcedures"]; } } [ConfigurationProperty("parameters", IsRequired = false)] public ParametersCollection Parameters { get { return (ParametersCollection)this["parameters"]; } } [ConfigurationProperty("saveLocation", IsRequired = true)] public string SaveLocation { get { return (string)this["saveLocation"]; } } [ConfigurationProperty("recipients", IsRequired = true)] public RecipientsCollection Recipients { get { return (RecipientsCollection)this["recipients"]; } } } public class StoredProcedure : ConfigurationElement { [ConfigurationProperty("storedProcedureName", IsRequired = true)] public string StoredProcedureName { get { return (string)this["storedProcedureName"]; } } } public class StoredProceduresCollection : ConfigurationElementCollection { protected override ConfigurationElement CreateNewElement() { throw new NotImplementedException(); } protected override ConfigurationElement CreateNewElement(string elementName) { return base.CreateNewElement(elementName); } protected override object GetElementKey(ConfigurationElement element) { throw new NotImplementedException(); } public StoredProcedure this[int index] { get { return (StoredProcedure)base.BaseGet(index); } } } } And here is the very straight-forward code to create the variable: ReportsCollection reportsCollection = (ReportsCollection) System.Configuration.ConfigurationManager.GetSection("ReportGroup");
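    Two things stand out in the snippet above: CreateNewElement and GetElementKey throw NotImplementedException, so the runtime cannot build or index Report elements while parsing the section, and Report derives from ConfigurationSection although it is used as a collection element. A minimal sketch of the usual pattern (section, element and attribute names are illustrative and must match what is declared under <configSections> and in the config XML):

        using System.Configuration;

        // The type that ConfigurationManager.GetSection("ReportGroup") returns.
        public class ReportGroupSection : ConfigurationSection
        {
            [ConfigurationProperty("", IsDefaultCollection = true)]
            public ReportsCollection Reports
            {
                get { return (ReportsCollection)base[""]; }
            }
        }

        public class ReportsCollection : ConfigurationElementCollection
        {
            // Called by the runtime for every report element it finds in the section.
            protected override ConfigurationElement CreateNewElement()
            {
                return new Report();
            }

            // Must return something unique per element, e.g. the report name.
            protected override object GetElementKey(ConfigurationElement element)
            {
                return ((Report)element).ReportName;
            }

            public Report this[int index]
            {
                get { return (Report)BaseGet(index); }
            }
        }

        // Collection elements derive from ConfigurationElement, not ConfigurationSection.
        public class Report : ConfigurationElement
        {
            [ConfigurationProperty("reportName", IsRequired = true)]
            public string ReportName
            {
                get { return (string)this["reportName"]; }
            }
        }

    With a section class like this registered, the call becomes ReportGroupSection section = (ReportGroupSection)ConfigurationManager.GetSection("ReportGroup"); and the collection is reached through section.Reports.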

    Read the article

  • memory alignment within gcc structs

    - by Mumbles
    I am porting an application to an ARM platform in C, the application also runs on an x86 processor, and must be backward compatible. I am now having some issues with variable alignment. I have read the gcc manual for __attribute__((aligned(4),packed)) I interpret what is being said as the start of the struct is aligned to the 4 byte boundry and the inside remains untouched because of the packed statement. originally I had this but occasionally it gets placed unaligned with the 4 byte boundary. typedef struct { unsigned int code; unsigned int length; unsigned int seq; unsigned int request; unsigned char nonce[16]; unsigned short crc; } __attribute__((packed)) CHALLENGE; so I change it to this. typedef struct { unsigned int code; unsigned int length; unsigned int seq; unsigned int request; unsigned char nonce[16]; unsigned short crc; } __attribute__((aligned(4),packed)) CHALLENGE; The understand I stated earlier seems to be incorrect as both the struct is now aligned to a 4 byte boundary, and and the inside data is now aligned to a four byte boundary, but because of the endianess, the size of the struct has increased in size from 42 to 44 bytes. This size is critical as we have other applications that depend on the struct being 42 bytes. Could some describe to me how to perform the operation that I require. Any help is much appreciated.

    Read the article

  • IIS6 compressing static files, but not JSON requests

    - by user500038
    I have IIS6 compression setup, and static content is being compressed correctly as per Coding Horror. Example of a static file: Response Headers Content-Length 55513 Content-Type application/x-javascript Content-Encoding gzip Last-Modified Mon, 20 Dec 2010 15:31:58 GMT Accept-Ranges bytes Vary Accept-Encoding Server Microsoft-IIS/6.0 X-Powered-By ASP.NET Date Wed, 29 Dec 2010 16:37:23 GMT Request Headers Accept */* Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive As you can see, the response is correctly compressed. However with a JSON call you'll see the request with the gzip parameter correctly set: Accept application/json, text/javascript, */*; q=0.01 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive X-Requested-With XMLHttpRequest Then for the response: Cache-Control private Content-Length 6811 Content-Type application/json; charset=utf-8 Server Microsoft-IIS/6.0 X-Powered-By ASP.NET X-AspNet-Version 2.0.50727 X-AspNetMvc-Version 2.0 No gzip! I found this article by Rick Stahl, which outlines how to compress in your code, but I'd like IIS to handle this. Is this possible with IIS6?

    Read the article

  • Why is setting HTML5's CanvasPixelArray values ridiculously slow, and how can I do it faster?

    - by Nixuz
    I am trying to do some dynamic visual effects using the HTML 5 canvas' pixel manipulation, but I am running into a problem where setting pixels in the CanvasPixelArray is ridiculously slow. For example if I have code like: imageData = ctx.getImageData(0, 0, 500, 500); for (var i = 0; i < imageData.length; i += 4){ imageData.data[index] = buffer[i]; imageData.data[index + 1] = buffer[i]; imageData.data[index + 2] = buffer[i]; } ctx.putImageData(imageData, 0, 0); Profiling with Chrome reveals, it runs 44% slower than the following code where CanvasPixelArray is not used. tempArray = new Array(500 * 500 * 4); imageData = ctx.getImageData(0, 0, 500, 500); for (var i = 0; i < imageData.length; i += 4){ tempArray[index] = buffer[i]; tempArray[index + 1] = buffer[i]; tempArray[index + 2] = buffer[i]; } ctx.putImageData(imageData, 0, 0); My guess is that the reason for this slowdown is due to the conversion between the Javascript doubles and the internal unsigned 8bit integers, used by the CanvasPixelArray. Is this guess correct? Is there anyway to reduce the time spent setting values in the CanvasPixelArray?

    Read the article

  • Total Order between !different! volatile variables?

    - by andreas
    Hi all, Consider the following Java code: volatile boolean v1 = false; volatile boolean v2 = false; //Thread A v1 = true; if (v2) System.out.println("v2 was true"); //Thread B v2 = true; if (v1) System.out.println("v1 was true"); If there was a globally visible total order for volatile accesses then at least one println would always be reached. Is that actually guaranteed by the Java Standard? Or is an execution like this possible: A: v1 = true; B: v2 = true; A: read v2 = false; B: read v1 = false; A: v2 = true becomes visible (after the if) B: v1 = true becomes visible (after the if) I could only find statements about accesses to the same volatile variable in the Standard (but I might be missing something). "A write to a volatile variable (§8.3.1.4) v synchronizes-with all subsequent reads of v by any thread (where subsequent is defined according to the synchronization order)." http://java.sun.com/docs/books/jls/third_edition/html/memory.html#17.4.4 Thanks!

    Read the article

  • Trying to modify the text of a document onkeydown is inserting text incorrectly.

    - by Benny
    I have the following html: <html> <head> <script> function myKeyDown() { var myDiv = document.getElementById('myDiv'); myDiv.innerHTML = myDiv.innerHTML.replace(/(@[a-z0-9_]+)/gi, '<strong>$1</strong>'); } function init() { document.addEventListener('keydown', myKeyDown, false); document.designMode = "on"; } window.onload = init; </script> </head> <body> <div id="myDiv"> This is my variable name: @varname. If I type here things go wrong... </div> </body> </html> My goal is to do a kind of syntax highlighting on edit, to highlight variable names that begin with an @ symbol. However, when I edit the document's body, the function runs but the cursor is automatically shifted to the beginning of the body before the keystroke is performed. My hypothesis is that the keypress event is trying to insert the new character at a specified index, but when I run the replace function the indices get messed up so it defaults the character insertion point to the beginning. I'm using Firefox to test by the way. Any help would be appreciated. Thanks, B.J.

    Read the article
