Search Results

Search found 15674 results on 627 pages for 'bash date'.

  • How do I define a Calculated Measure in MDX based on a Dimension Attribute?

    - by ShaneD
    I would like to create a calculated measure that sums up only a specific subset of records in my fact table, based on a dimension attribute. Given:

        Dimensions: Date, LedgerLineItem {Charge, Payment, Write-Off, Copay, Credit}
        Measures: LedgerAmount
        Relationships: LedgerLineItem is a degenerate dimension of FactLedger

    If I break down LedgerAmount by LedgerLineItem.Type I can easily see how much is charged, paid, credited, etc., but when I do not break it down by LedgerLineItem.Type I cannot easily add the charge, payment, credit, etc. amounts into a pivot table. I would like to create separate calculated measures that sum only a specific type (or multiple types) of ledger facts. An example of the desired output would be:

        | Year  | Charged | Total Paid | Amount - Ledger |
        | 2008  | $1000   | $600       | -$400           |
        | 2009  | $2000   | $1500      | -$500           |
        | Total | $3000   | $2100      | -$900           |

    I have tried to create the calculated measure a couple of ways, and each one works in some circumstances but not in others. Now, before anyone says do this in ETL: I have already done it in ETL and it works just fine. What I am trying to do, as part of learning to understand MDX better, is to figure out how to duplicate in MDX what I have done in the ETL, and so far I am unable to do that. Here are two attempts I have made and the problems with them.

    This works only when ledger type is in the pivot table. It returns the correct amount for the ledger entries (although in this case it is identical to [amount - ledger]), but when I try to remove type and just get the sum of all ledger entries it returns unknown.

        CASE
            WHEN ([Ledger].[Type].currentMember = [Ledger].[Type].&[Credit])
              OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Paid])
              OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Held Money: Copay])
            THEN [Measures].[Amount - ledger]
            ELSE 0
        END

    This works only when ledger type is not in the pivot table. It always returns the total payment amount, which is incorrect when I am slicing by type, as I would only expect to see the credit portion under credit, the paid portion under paid, $0 under charge, etc.

        sum({([Ledger].[Type].&[Credit]), ([Ledger].[Type].&[Paid]), ([Ledger].[Type].&[Held Money: Copay])},
            [Measures].[Amount - ledger])

    Is there any way to make this return the correct numbers regardless of whether Ledger.Type is included in my pivot table or not?

  • Iterating Over Params Hash

    - by Joe Clark
    I'm having an extremely frustrating time getting some images to upload. They are obviously being uploaded as rack/multipart but the way that I'm iterating over my params hash must be causing the problem. I could REALLY use some help, so I can stop pulling out my hair. So I've got a params hash that looks like this: Parameters: {"commit"=>"Submit", "sighting_report"=>[{"number_seen"=>"1", "picture"=>#<File:/var/folders/IX/IXXrbzpCHkq68OuyY-yoI++++TI/-Tmp-/RackMultipart.85991.5>, "species_id"=>"2"}], "authenticity_token"=>"u0eN5MAfvGWtfEzrqBt4qfrL54VJ9SGX0jFLZCJ8iRM=", "sighting"=>{"sighting_date(2i)"=>"6", "name"=>"", "sighting_date(3i)"=>"5", "county"=>"0", "notes"=>"", "location"=>"", "sighting_date(1i)"=>"2010", "email"=>""}} My form can have multiple sighting reports with multiple pictures in each sighting report. Here's my controller code: def create_multiple @report = Report.new @report.name = params[:sighting]["name"] @report.sighting_date = Date.civil(params[:sighting][:"sighting_date(1i)"].to_i, params[:sighting][:"sighting_date(2i)"].to_i, params[:sighting][:"sighting_date(3i)"].to_i) @report.county_id = params[:sighting][:county] @report.location = params[:sighting][:location] @report.notes = params[:sighting][:notes] @report.email = params[:sighting][:email] @report.save! @report.reload for sr in params[:sighting_report] do sighting = SightingReport.new sighting.report_id = @report.id sighting.species_id = sr[:species_id] sighting.number_seen = sr[:number_seen] sighting.save if sr[:picture] sighting.reload for pic in sr[:picture] do p = SpeciesPic.new p.uploaded_picture = pic p.species_id = sighting.species_id p.report_id = @report.id p.save! end end end redirect_to :action => 'new_multiple' end

  • Rails - handling global site settings

    - by egarcia
    I'm developing a new rails application which is supposed to be installed several times in order to implement several sites. There are some things, like the "Site Title" or the "Default Number of Items per Page" that clearly belong to a "global settings" table / config file. I've made a list of the things I think I'll need: ActiveRecord model that is capable of: Storing different kinds of data. I suppose this would be accomplished encoding the values on a string on the db, probably with a "type" field. Indexing settings by name Validations based on a "type" attribute (i.e. don't accept invalid dates on "date" settings) Validations based on a allows_nil property. A controller that allows me to change settings via views. I'm pretty sure I could implement this myself, but I'm not willing to reinvent the wheel. I've done some searching, but I could only find rails-settings, which doesn't really serve me: I need a proper model & controller so I can use declarative-authorization, and it does not provide any controller or view facilities. Is there a gem or plugin out there that implements what I want, or any library I should look at? Thanks a lot.

  • Reasonable expectation to support new Operating Systems?

    - by Neil N
    My company has a desktop app originally developed for Windows XP. The original programmer has since been fired (fired with extreme prejudice I might add). I have fixed the app various times but overall try to avoid it, it is a mess and the only real way to fix it is to completely rewrite it, which could take a year. We have been trying to "forget" about this app, and instead steer clients towards our web version, which is more up to date, easier to maintain, easier to extend, and WAY easier to support. Most clients agree, the web version is just better all around. However we have one client that insists on using the desktop app. The app required a little duct tape to get working on Vista, but now completely breaks on Windows 7. I'm not even sure WHAT all the fixes are to get it working on Win7 (the current time estimate stands at "miracle") but after both installing the RELEASE build, and running the DEBUG build from Visual Studio, the app has errors on nearly every user action, and from what I can see from a high level test run, none of them are related. Since Windows 7 did not exist when this app was developed, is my company really expected to make all the required changes to make it function as "smoothly" as it did on XP?

  • Javascript: Error 'Object Required.'

    - by javascripthelp
    The following is the error popup message I get when I click the "Finalize" button on my website:

        Line: 298 Char: 5 Error: Object required: 'lobi_c_selected(...)' Code: 0
        URL: http://10.128.23.50/i-prostage/AP/w_ap_check_reconciliation.asp?

    Normally, when I click the "Finalize" button, it generates and shows a report in a popup window. However, I get this error message instead. Can any of you help me locate the error in the following source code for the page, which I'm running in IE 6?

        sub cb_finalize
            dim ll_loop, ll_found, lobj_c_selected
            of_SetHourGlass(True)
            rpt_link.innerHTML = ""
            rpt_link.href = ""
            'Process only if at least one record was selected
            if rds1.Recordset.Recordcount > 0 then
                lb_found = false
                if rds1.Recordset.Recordcount = 1 then
                    if c_selected.checked then lb_found = true
                else
                    Set lobj_c_selected = document.all.item("c_selected")
                    for ll_loop = 1 to rds1.Recordset.Recordcount
                        if lobj_c_selected(ll_loop - 1).checked then
                            lb_found = true
                            exit for
                        end if
                    next
                end if
                if not lb_found then
                    msgbox "Please select a record to be posted.", vbInformation, "ProStage Accounting"
                    of_SetHourGlass(False)
                else
                    window.setTimeout "ue_process()", 100, vbscript 'Post Event
                end if
            else
                msgbox "There's no record to be posted." + vbcrlf + "Please select a record to be posted.", vbInformation, "ProStage Accounting"
                of_SetHourGlass(False)
            end if
        end sub

        Sub ue_process
            dim ll_loop, ll_count, ls_ret, ls_trxid, ls_r1
            'Get only selected records
            redim ls_trxid(rds1.Recordset.Recordcount)
            for ll_loop = 1 to rds1.Recordset.Recordcount
                rds1.Recordset.AbsolutePosition = ll_loop
                if not isnull(rds1.Recordset("clrdt")) then
                    'Add to TRXID array if selected
                    ll_count = ll_count + 1
                    ls_trxid(ll_count) = rds1.Recordset("trxid")
                end if
            next
            'Process reconciliation
            rds1.Recordset.MarshalOptions = 1
            ls_ret = iBO_Update.of_update_1(is_dbsrc, rds1.Recordset, "GLTRX", is_sql)
            if ls_ret < "1" then
                msgbox "Update Failed ! " + ls_ret, vbExclamation + vbOKonly, document.title
                of_SetHourGlass(False)
            else
                'Display Posting Journal & clear screen
                ue_posting_journal ls_trxid, ll_count
                Set rds1.SourceRecordset = iBO_Company.of_validate(is_dbsrc, "SELECT 1 FROM DUAL WHERE 1 = 2")
                ib_query = false 'Not to process RetrieveEnd
            end if
        End Sub

        Sub ue_posting_journal(as_trxid, al_count)
            dim ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq
            of_setreport() ' Start service
            'Prepare arguments for report in RPTMSTR table
            for ll_loop = 1 to al_count + 1
                select case ll_loop
                    case 1 'Range displayed as report title
                        ll_argseq = 800
                        ls_argtyp = null
                        ls_argmnt = "st_title.text = 'Bank: " + bnkid_name.value + ", As of Date: " + _
                            of_date_stringtodate(id_trxdt) + "'"
                        ll_sargseq = 0
                    case else 'TRXID array
                        ll_argseq = 1
                        ll_sargseq = ll_loop - 1
                        ls_argtyp = "S"
                        ls_argmnt = as_trxid(ll_loop - 1)
                end select
                of_report_register_array "d_rpt_ap_check_reconciliation_register", ll_argseq, ls_argtyp, ls_argmnt, ll_sargseq
            next
            of_report_process "d_rpt_ap_check_reconciliation_register", true, true 'Display report
            of_sethourglass(False)
        End Sub

  • Java Scanner won't follow file

    - by Steve Renyolds
    Trying to tail / parse some log files. Entries start with a date and can then span many lines. This works, but never sees new entries appended to the file.

        File inputFile = new File("C:/test.txt");
        InputStream is = new FileInputStream(inputFile);
        InputStream bis = new BufferedInputStream(is);
        //bis.skip(inputFile.length());
        Scanner src = new Scanner(bis);
        src.useDelimiter("\n2010-05-01 ");
        while (true) {
            while (src.hasNext()) {
                System.out.println("[ " + src.next() + " ]");
            }
        }

    It doesn't seem like Scanner's next() or hasNext() detects new entries added to the file. Any idea how else I can implement, basically, a tail -f with a custom delimiter?

    OK - using Kelly's advice I'm checking and refreshing the scanner, and this works. Thank you! If anyone has improvement suggestions, please share them.

        File inputFile = new File("C:/test.txt");
        InputStream is = new FileInputStream(inputFile);
        InputStream bis = new BufferedInputStream(is);
        //bis.skip(inputFile.length());
        Scanner src = new Scanner(bis);
        src.useDelimiter("\n2010-05-01 ");
        while (true) {
            while (src.hasNext()) {
                System.out.println("[ " + src.next() + " ]");
            }
            Thread.sleep(50);
            if (bis.available() > 0) {
                src = new Scanner(bis);
                src.useDelimiter("\n2010-05-01 ");
            }
        }

  • Browser: Cookie lost on refresh

    - by Nirmal
    I am experiencing a strange behaviour in my application in the Chrome browser (no problem with other browsers). When I refresh a page the cookie is normally sent properly, but intermittently the browser doesn't seem to pass the cookie on some refreshes. This is how I set my cookie:

        $identifier = /* some weird string */;
        $key = md5(uniqid(rand(), true));
        $timeout = number_format(time(), 0, '.', '') + 43200;
        setcookie('fboxauth', $identifier . ":" . $key, $timeout, "/", "fbox.mysite.com", 0);

    This is what I am using for the page headers:

        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
        header("Expires: Thu, 25 Nov 1982 08:24:00 GMT");   // Date in the past

    Do you see any issue here that might affect the cookie handling? Thank you for any suggestion.

    EDIT-01: It seems that the cookie is not being sent with some requests. This happens intermittently, and I am now seeing this behaviour in ALL browsers. Has anyone come across such a situation? Is there any situation where a cookie will not be sent with the request? Thanks again for any guidance.

  • UICollectionView with one static cell and N dynamic ones from a fetchresultscontroller exception

    - by nflacco
    I'm trying to make a UITableView that shows a blog post and comments for that post. My setup is a tableview in storyboard with two dynamic prototype cells. The first cell is for the post and should never change. The second cell represents the 0 to N comments. My cellForRowAtIndexPath method shows the post cell properly, but fails to get the comment at the given index path (though if I comment out the fetch I get the appropriate number of comment cells with a green background that I set as a visual debug thing). let comment = fetchedResultController.objectAtIndexPath(indexPath) as Comment I get the following exception on this line: 2014-08-24 15:06:40.712 MessagePosting[21767:3266409] *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[__NSArrayM objectAtIndex:]: index 1 beyond bounds [0 .. 0]' *** First throw call stack: ( 0 CoreFoundation 0x0000000101aa43e5 __exceptionPreprocess + 165 1 libobjc.A.dylib 0x00000001037f9967 objc_exception_throw + 45 2 CoreFoundation 0x000000010198f4c3 -[__NSArrayM objectAtIndex:] + 227 3 CoreData 0x00000001016e4792 -[NSFetchedResultsController objectAtIndexPath:] + 162 Section and cell setup: override func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int { // #warning Incomplete method implementation. // Return the number of rows in the section. switch section { case 0: return 1 default: if let realPost:Post = post { return fetchedResultController.sections[0].numberOfObjects } else { return 0 } } } override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! { switch indexPath.section { case 0: let cell = tableView.dequeueReusableCellWithIdentifier(postViewCellIdentifier, forIndexPath: indexPath) as UITableViewCell cell.backgroundColor = lightGrey if let realPost:Post = self.post { cell.textLabel.text = realPost.text } return cell default: let cell = tableView.dequeueReusableCellWithIdentifier(commentCellIdentifier, forIndexPath: indexPath) as UITableViewCell cell.backgroundColor = UIColor.greenColor() let comment = fetchedResultController.objectAtIndexPath(indexPath) as Comment // <---------------------------- :( cell.textLabel.text = comment.text return cell } } FRC: func controllerDidChangeContent(controller: NSFetchedResultsController!) { tableView.reloadData() } func getFetchedResultController() -> NSFetchedResultsController { fetchedResultController = NSFetchedResultsController(fetchRequest: taskFetchRequest(), managedObjectContext: managedObjectContext, sectionNameKeyPath: nil, cacheName: nil) return fetchedResultController } func taskFetchRequest() -> NSFetchRequest { if let realPost:Post = self.post { let fetchRequest = NSFetchRequest(entityName: "Comment") let sortDescriptor = NSSortDescriptor(key: "date", ascending: false) fetchRequest.predicate = NSPredicate(format: "post = %@", realPost) fetchRequest.sortDescriptors = [sortDescriptor] return fetchRequest } else { return NSFetchRequest(entityName: "") } }

  • Numerating Comments

    - by John
    The code below prints out all comments for a given "submissionid" in chronological order. How could I numerate these comments? (In other words, how do I print out a "1." next to the oldest comment, a "2." next to the second-oldest comment, etc.?) $submission = mysql_real_escape_string($_GET['submission']); $submissionid = mysql_real_escape_string($_GET['submissionid']); $sqlStr = "SELECT comment.comment, comment.datecommented, login.username FROM comment LEFT JOIN login ON comment.loginid=login.loginid WHERE submissionid=$submissionid ORDER BY comment.datecommented ASC LIMIT 100"; $result = mysql_query($sqlStr); $arr = array(); echo "<table class=\"commentecho\">"; while ($row = mysql_fetch_array($result)) { echo '<tr>'; echo '<td class="commentname1">'.stripslashes($row["comment"]).'</td>'; echo '</tr>'; echo '<tr>'; echo '<td class="commentname2"><a href="http://www...com/sandbox/members/index.php?profile='.$row["username"].'">'.$row["username"].'</a>'.date('l, F j, Y &\nb\sp &\nb\sp g:i a &\nb\sp &\nb\sp \N\E\W &\nb\sp \Y\O\R\K &\nb\sp \T\I\M\E', strtotime($row["datecommented"])).'</td>'; echo '</tr>'; } echo "</table>"

  • A couple of questions about NHibernate's GuidCombGenerator

    - by Eyvind
    The following code can be found in the NHibernate.Id.GuidCombGenerator class. The algorithm creates sequential (comb) guids by combining a "random" guid with a DateTime. I have a couple of questions related to the lines that I have marked with *1) and *2) below:

        private Guid GenerateComb()
        {
            byte[] guidArray = Guid.NewGuid().ToByteArray();

            // *1)
            DateTime baseDate = new DateTime(1900, 1, 1);
            DateTime now = DateTime.Now;

            // Get the days and milliseconds which will be used to build the byte string
            TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
            TimeSpan msecs = now.TimeOfDay;

            // *2)
            // Convert to a byte array
            // Note that SQL Server is accurate to 1/300th of a millisecond so we divide by 3.333333
            byte[] daysArray = BitConverter.GetBytes(days.Days);
            byte[] msecsArray = BitConverter.GetBytes((long) (msecs.TotalMilliseconds / 3.333333));

            // Reverse the bytes to match SQL Servers ordering
            Array.Reverse(daysArray);
            Array.Reverse(msecsArray);

            // Copy the bytes into the guid
            Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
            Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);

            return new Guid(guidArray);
        }

    First of all, for *1), wouldn't it be better to have a more recent date as the baseDate, e.g. 2000-01-01, so as to make room for more values in the future? Regarding *2), why would we care about the accuracy of DateTimes in SQL Server, when we are only interested in the bytes of the datetime anyway, and never intend to store the value in an SQL Server datetime field? Wouldn't it be better to use all the accuracy available from DateTime.Now?
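    For comparison, here is a rough C# sketch of the same byte layout with a later base date and without the 1/300 s rounding. This is not the NHibernate source; the CombSketch name and the 2000-01-01 epoch are assumptions made purely for illustration. The division by 3.333333 in the original presumably exists so the trailing bytes line up with SQL Server's datetime resolution of roughly 1/300 of a second; dropping it simply makes the time portion finer-grained.

        using System;

        static class CombSketch
        {
            // Assumed epoch for illustration; NHibernate's generator uses 1900-01-01.
            private static readonly DateTime Epoch = new DateTime(2000, 1, 1);

            public static Guid NewComb()
            {
                byte[] guidArray = Guid.NewGuid().ToByteArray();

                DateTime now = DateTime.Now;
                TimeSpan days = now - Epoch;

                // Keep full millisecond precision instead of dividing by 3.333333.
                byte[] daysArray = BitConverter.GetBytes(days.Days);
                byte[] msecsArray = BitConverter.GetBytes((long)now.TimeOfDay.TotalMilliseconds);

                // Same byte placement as the original, so the values still sort roughly by creation time.
                Array.Reverse(daysArray);
                Array.Reverse(msecsArray);
                Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
                Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);

                return new Guid(guidArray);
            }
        }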

  • Post method in Servlet is not being called again after being executed once

    - by SaurabhCsIITKgp
    I am implementing a database-based web application using servlets. When I input a parameter using a form in the JSP page, it redirects to a servlet which then adds the value to the database (the servlet creates a new table if the table doesn't already exist). The creation of the table and the addition of the value work fine if the table doesn't already exist. Once it is created, however, and the parameter is entered again in the form, the submit button no longer redirects to the servlet, nor is the value added to the database. Kindly advise me as to where I am going wrong. Following are the snippets of my code.

    From the JSP page (/showmanager is the url-pattern of the servlet):

        <form action="showmanager" method="post">
            <h3>Enter name of the show: </h3>
            <input type="text" name="showname" value="">
            <input type="hidden" name="task" value="addshow" />
            <input type="button" value="Add Show">
        </form>

    From the servlet (POST method):

        protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
            if (request.getParameter("task").equals("addshow")) {
                this.addShow(request.getParameter("showname"));
                response.sendRedirect("showmanager.jsp");
            }
        }

    Method to add in the database:

        protected boolean addShow(String showname) {
            try {
                statement = con.prepareStatement("INSERT INTO showdb10(name) VALUES ('" + showname + "')");
                if (statement.executeUpdate() > 0) {
                    return true;
                }
            } catch (Exception e) {
                try {
                    statement = con.prepareStatement("create table showdb10 (id int NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, date varchar(20), time varchar(20), b_total int, o_total int, b_avbl int, o_avbl int, b_price double(10,2), o_price double(10,2), seat_no varchar(20), transaction_id varchar(255), total_sales double(10,2), paymnt_artists double(10,2), paymnt_othr double(10,2), flag varchar(20), PRIMARY KEY(id))");
                    statement.executeUpdate();
                    statement = con.prepareStatement("INSERT INTO showdb10(name) VALUES ('" + showname + "')");
                    if (statement.executeUpdate() > 0) {
                        return true;
                    }
                } catch (Exception e2) {}
            }
            return false;
        }

  • IIS6 compressing static files, but not JSON requests

    - by user500038
    I have IIS6 compression setup, and static content is being compressed correctly as per Coding Horror. Example of a static file: Response Headers Content-Length 55513 Content-Type application/x-javascript Content-Encoding gzip Last-Modified Mon, 20 Dec 2010 15:31:58 GMT Accept-Ranges bytes Vary Accept-Encoding Server Microsoft-IIS/6.0 X-Powered-By ASP.NET Date Wed, 29 Dec 2010 16:37:23 GMT Request Headers Accept */* Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive As you can see, the response is correctly compressed. However with a JSON call you'll see the request with the gzip parameter correctly set: Accept application/json, text/javascript, */*; q=0.01 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive X-Requested-With XMLHttpRequest Then for the response: Cache-Control private Content-Length 6811 Content-Type application/json; charset=utf-8 Server Microsoft-IIS/6.0 X-Powered-By ASP.NET X-AspNet-Version 2.0.50727 X-AspNetMvc-Version 2.0 No gzip! I found this article by Rick Stahl, which outlines how to compress in your code, but I'd like IIS to handle this. Is this possible with IIS6?

  • How to associate static entity instances in a Session without database retrieval

    - by Michael Hedgpeth
    I have a simple Result class that used to be an Enum but has evolved into being its own class with its own table. public class Result { public static readonly Result Passed = new Result(StatusType.Passed) { Id = [Predefined] }; public static readonly Result NotRun = new Result(StatusType.NotRun) { Id = [Predefined] }; public static readonly Result Running = new Result(StatusType.Running) { Id = [Predefined] }; } Each of these predefined values has a row in the database at their predefined Guid Id. There is then a failed result that has an instance per failure: public class FailedResult : Result { public FailedResult(string description) : base(StatusType.Failed) { . . . } } I then have an entity that has a Result: public class Task { public Result Result { get; set; } } When I save a Task, if the Result is a predefined one, I want NHibernate to know that it doesn't need to save that to the database, nor does it need to fetch it from the database; I just want it to save by Id. The way I get around this is when I am setting up the session, I call a method to load the static entities: protected override void OnSessionOpened(ISession session) { LockStaticResults(session, Result.Passed, Result.NotRun, Result.Running); } private static void LockStaticResults(ISession session, params Result[] results) { foreach (var result in results) { session.Load(result, result.Id); } } The problem with the session.Load method call is it appears to be fetching to the database (something I don't want to do). How could I make this so it does not fetch the database, but trusts that my static (immutable) Result instances are both up to date and a part of the session?
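    One approach that is often suggested for this, sketched below, is to attach the reference purely by id with ISession.Load<T>, which normally hands back an uninitialized proxy without issuing a SELECT, so saving the Task only writes the foreign key. The sketch assumes Result is mapped as lazy and is proxyable (not sealed, virtual members); it is not code from the question.

        using NHibernate;

        static class PredefinedResults
        {
            // Attach a predefined Result to a Task purely by id. As long as the proxy is never
            // initialized, no database round trip should be needed for the Result row.
            public static void SetResult(ISession session, Task task, System.Guid resultId)
            {
                task.Result = session.Load<Result>(resultId);
            }
        }

    The two-argument Load(instance, id) overload used in LockStaticResults, by contrast, is documented as reading the persistent state into the supplied instance, which would explain the fetch you are seeing.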

  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP Headers for caching. Our webserver is generating all dynamic content as XML and we're using semi-static XSL files to display it with some dynamic JSON requests thrown in for good measure along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update which might change the XSL and image files. Here's what needs to be done: cache the XSL and image files and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently: Using ETags with the XSL and image files, using the modified time and size to generate the ETag Setting Cache-Control: no-cache on the XML and JSON responses As I said, everything works dandy until a firmware update when the XSL and image files are sometimes cached. I've seen it work fine with the latest versions of Firefox and Safari but have had some problems with IE. I know one solution to this problem would be simply rename the XSL and image files after each version (eg. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future but this would be difficult with the XSL files and I'd like to avoid this. Note: There is a clock on the unit but requires the user to set it and might not be 100% reliable which is what might be causing my caching issues when using ETags. What's the best practice that I should employ? I'd like to avoid as many webserver requests as possible but invalidating old XSL and image files after a software update is the #1 priority.

  • Saving and retrieving image in SQL database from C# problem

    - by Mobin
    I used this code for inserting records in a person table in my DB System.IO.MemoryStream ms = new System.IO.MemoryStream(); Image img = Image_Box.Image; img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp); this.personTableAdapter1.Insert(NIC_Box.Text.Trim(), Application_Box.Text.Trim(), Name_Box.Text.Trim(), Father_Name_Box.Text.Trim(), DOB_TimePicker.Value.Date, Address_Box.Text.Trim(), City_Box.Text.Trim(), Country_Box.Text.Trim(), ms.GetBuffer()); but when i retrieve this with this code byte[] image = (byte[])Person_On_Application.Rows[0][8]; MemoryStream Stream = new MemoryStream(); Stream.Write(image, 0, image.Length); Bitmap Display_Picture = new Bitmap(Stream); Image_Box.Image = Display_Picture; it works perfectly fine but if i update this with my own generated Query like System.IO.MemoryStream ms = new System.IO.MemoryStream(); Image img = Image_Box.Image; img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp); Query = "UPDATE Person SET Person_Image ='"+ms.GetBuffer()+"' WHERE (Person_NIC = '"+NIC_Box.Text.Trim()+"')"; the next time i use the same code for retrieving the image and displaying it as used above . Program throws an exception
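    The usual explanation for this symptom is that concatenating a byte[] into the SQL string just embeds the literal text "System.Byte[]" in the column, which later fails to load as a bitmap. Below is a rough sketch of a parameterised update instead; connection is assumed to be an open SqlConnection, the column names are copied from the question, and everything else is illustrative rather than a drop-in fix.

        using (var ms = new System.IO.MemoryStream())
        using (var cmd = new System.Data.SqlClient.SqlCommand(
            "UPDATE Person SET Person_Image = @img WHERE Person_NIC = @nic", connection))
        {
            Image_Box.Image.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);

            // Pass the raw bytes as a varbinary(max) parameter rather than concatenating them into the SQL text.
            cmd.Parameters.Add("@img", System.Data.SqlDbType.VarBinary, -1).Value = ms.ToArray();
            cmd.Parameters.AddWithValue("@nic", NIC_Box.Text.Trim());
            cmd.ExecuteNonQuery();
        }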

  • Google App Engine JDO error caused by GregorianCalendar ?

    - by Frank
    My class looks like this : import javax.jdo.annotations.IdGeneratorStrategy; import javax.jdo.annotations.IdentityType; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; import javax.jdo.annotations.PrimaryKey; @PersistenceCapable(identityType=IdentityType.APPLICATION) public class Contact_Info implements Serializable { @PrimaryKey @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY) Long Id; public static final long serialVersionUID=26362862L; @Persistent String Contact_Id=""; @Persistent GregorianCalendar Date_1; public Contact_Info() { } public void setId(Long value) { Id=value; } public Long getId() { return Id; } public void setContact_Id(String value) { Contact_Id=value; } public String getContact_Id() { return Contact_Id; } public void setDate_1(GregorianCalendar value) { Date_1=value; } public GregorianCalendar getDate_1() { return Date_1; } public String toString() { return Contact_Id; } } When it's run, I got the following error : java.lang.UnsupportedOperationException org.datanucleus.store.appengine.EntityUtils.getPropertyName(EntityUtils.java:62) org.datanucleus.store.appengine.DatastoreFieldManager.storeObjectField(DatastoreFieldManager.java:839) org.datanucleus.state.AbstractStateManager.providedObjectField(AbstractStateManager.java:1037) PayPal_Monitor.Contact_Info.jdoProvideField(Contact_Info.java) PayPal_Monitor.Contact_Info.jdoProvideFields(Contact_Info.java) org.datanucleus.state.JDOStateManagerImpl.provideFields(JDOStateManagerImpl.java:2715) org.datanucleus.store.appengine.DatastorePersistenceHandler.insertPreProcess(DatastorePersistenceHandler.java:341) org.datanucleus.store.appengine.DatastorePersistenceHandler.insertObjects(DatastorePersistenceHandler.java:251) If I take out the "GregorianCalendar Date_1", it works correctly, what should I do to fix it ? I do need the date in it. Frank

  • How to use SQLErrorCodeSQLExceptionTranslator and DAO class with @Repository in Spring?

    - by GuidoMB
    I'm using Spring 3.0.2 and I have a class called MovieDAO that uses JDBC to handle the db. I have set the @Repository annotations and I want to convert the SQLException to the Spring's DataAccessException I have the following example: @Repository public class JDBCCommentDAO implements CommentDAO { static JDBCCommentDAO instance; ConnectionManager connectionManager; private JDBCCommentDAO() { connectionManager = new ConnectionManager("org.postgresql.Driver", "postgres", "postgres"); } static public synchronized JDBCCommentDAO getInstance() { if (instance == null) instance = new JDBCCommentDAO(); return instance; } @Override public Collection<Comment> getComments(User user) throws DAOException { Collection<Comment> comments = new ArrayList<Comment>(); try { String query = "SELECT * FROM Comments WHERE Comments.userId = ?"; Connection conn = connectionManager.getConnection(); PreparedStatement stmt = conn.prepareStatement(query); stmt = conn.prepareStatement(query); stmt.setInt(1, user.getId()); ResultSet result = stmt.executeQuery(); while (result.next()) { Movie movie = JDBCMovieDAO.getInstance().getLightMovie(result.getInt("movie")); comments.add(new Comment(result.getString("text"), result.getInt("score"), user, result.getDate("date"), movie)); } connectionManager.closeConnection(conn); } catch (SQLException e) { e.printStackTrace(); //CONVERT TO DATAACCESSEXCEPTION } return comments; } } I Don't know how to get the Translator and I don't want to extends any Spring class, because that is why I'm using the @Repository annotation

  • How do I bind an iTunes style source list to an NSTableView using Core Data?

    - by Austin
    I have an iTunes style interface in my application: Source list (NSOutlineView) on the left that contains different libraries and playlists with an NSTableView on the right side of the interface displaying information for "Presentations". Similar to iTunes, I am showing the same type of information in the table view whether a library or playlist is selected (title, author, date created, etc). I currently have an NSArrayController connected to my NSTableView and was setting the fetch predicate based on what was selected in the source list. This works fine when selecting a library because I can just set the fetch predicate to filter by the "type" field in my Presentation Core Data entity. When I try to adjust the fetch predicate for the playlist however, it doesn't look like there is any way to set the fetch predicate because I've got a table in between Playlists and Presentations to keep up with the order within the Playlist. According to the Apple docs, these type of predicates are not doable with Core Data (it basically doesn't multiple inner joins). Below is the relevant portion of my Data Model. Is my data model setup incorrectly? Should I drop the NSArrayController and handle connecting the NSTableView up by hand? I'm trying to figure out if there is a simple fix, or really a design flaw.

  • Can any linux API or tool watch for any change in any folder below e.g. /SharedRoot or do I have to

    - by Simon B.
    I have a folder with ~10 000 subfolders. Can any linux API or tool watch for any change in any folder below e.g. /SharedRoot or do I have to setup inotify for each folder? (i.e. I loose if I want to do this for 10k+ folders). I guess yes, since I've already seen examples of this inefficient method, for instance http://twistedmatrix.com/trac/browser/trunk/twisted/internet/inotify.py?rev=28866#L345 My problem: I need to keep folders time-sorted with most recently active "project" up top. When a file changes, each folder above that file should update its last-modified timestamp to match the file. Delays are ok. Opening a file (typically MS Excel) and closing again, its file date can jump up and then down again. For this reason I need to wait until after a file is closed, then queue the folder of that file for checking, and only a while later do I go and look for the newest file in its folder, since the filedate of the triggering file could already be back-dated to its original timestamp by Excel or similar programs. Also in case several files from same folder are used/created, it makes sense to buffer timestamping of that folders' parents to at least get a bunch of updates collapsed into one delayed update. I'm looking for a linux solution. I have some code that can be run on a windows server, most of the queing functionality is here: http://github.com/sesam/FolderdateFollowsFiles/blob/master/FolderdateFollowsFiles/Follower.vb Available API:s The relative of inotify on windows, ReadDirectoryChangesW, can watch a folder and its whole subtree; see bWatchSubtree on http://msdn.microsoft.com/en-us/library/aa365465(VS.85).aspx Samba? Patching samba source is a possibility, but perhaps there are already hooks available? Other possibilities, like client side (various windows versions) and spying on file activities in order to update folders recursively?

  • Is there something wrong with this code? AMFReader vs AMFWriter

    - by Triynko
    Something doesn't seem right about the source code for Flash Remoting's Date(AS3) <- DateTime(.NET) stream reader/writer methods, when it comes to handling UTC <- Local times. It seems to write the DateTime data fine, including a 64-bit representation as milliseconds elapsed since Jan 1, 1970, as well as a UTC offset.

        public void WriteDateTime(DateTime d)
        {
            this.BaseStream.WriteByte(11);
            DateTime time = new DateTime(0x7b2, 1, 1);
            long totalMilliseconds = (long)d.Subtract(time).TotalMilliseconds;
            long l = BitConverter.DoubleToInt64Bits((double)totalMilliseconds);
            this.WriteLong(l);
            int hours = TimeZone.CurrentTimeZone.GetUtcOffset(DateTime.Today).Hours;
            this.WriteShort(hours);
        }

    But when the data is read... it seems to be ignoring the short UTC offset value that was written, and appears to just discard it!

        private DateTime ReadDateValue()
        {
            long num2 = (long)this.ReadDouble();
            DateTime time2 = new DateTime(0x7b2, 1, 1).AddMilliseconds((double)num2);
            int num3 = this.ReadInt16() / 60; //num3 is not used for anything!
            return time2;
        }

    Can anyone make sense of this? I also found some similar source code for AMFReader here, which has a ReadDateTime method that seems to do something very similar... but goes on to use the UTC offset for something.
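    For what it's worth, here is a hedged sketch of the alternative being hinted at: convert to UTC before writing and send a zero time-zone field, reusing the question's BaseStream/WriteLong/WriteShort helpers. In AMF0 the trailing 16-bit time-zone slot is generally described as reserved, which may be why the reader ignores it. This is not the original Flash Remoting source, only an illustration of how the offset could be made irrelevant.

        public void WriteDateTimeUtc(DateTime d)
        {
            this.BaseStream.WriteByte(11);                            // same AMF date marker the original writes
            DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
            double millis = (d.ToUniversalTime() - epoch).TotalMilliseconds;
            this.WriteLong(BitConverter.DoubleToInt64Bits(millis));   // same bit pattern as the original WriteDateTime
            this.WriteShort(0);                                       // offset already applied, so send zero
        }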

  • Resetting a PChar variable

    - by scott-thornton
    Hi, I don't know much about delphi win 32 programming, but I hope someone can answer my question. I get duplicate l_sGetUniqueIdBuffer saved into the database which I want to avoid. The l_sGetUniqueIdBuffer is actually different ( the value of l_sAuthorisationContent is xml, and I can see a different value generated by the call to getUniqueId) between rows. This problem is intermittant ( duplicates are rare...) There is only milliseconds difference between the update date between the rows. Given: ( unnesseary code cut out) l_sGetUniqueIdBuffer: PChar; FOutputBufferSize : integer; FOutputBufferSize := 1024; while( not dmAccomClaim.ADOQuClaimIdentification.Eof ) do begin // Get a unique id for the request l_sGetUniqueIdBuffer := AllocMem (FOutputBufferSize); l_returnCode := getUniqueId (m_APISessionId^, l_sGetUniqueIdBuffer, FOutputBufferSize); dmAccomClaim.ADOQuAddContent.Active := False; dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pContent').Value := (WideString(l_sAuthorisationContent)); dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pClaimId').Value := dmAccomClaim.ADOQuClaimIdentification.FieldByName('SB_CLAIM_ID').AsString; dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pUniqueId').Value := string(l_sGetUniqueIdBuffer); dmAccomClaim.ADOQuAddContent.ExecSQL; FreeMem( l_sAuthorisationContent, l_iAuthoriseContentSize ); FreeMem( l_sGetUniqueIdBuffer, FOutputBufferSize ); end; I guess i need to know, is the value in l_sGetUniqueIdBuffer being reset for every row??

  • Reading DATA from an OBJECT asp.net MVC C#

    - by kalyan
    Hi, I am new to MVC and I am stuck in a weird situation. I have to read the data back from the objects below, and I have tried different ways without finding a solution. Please help.

        IList<User> u = new UserRepository().Getuser(Name.ToUpper(), UserName.ToUpper(), UserCertNumber.ToUpper(),
            Date.ToUpper(), UserType.ToUpper(), Company.ToUpper(), PageNumber, Orderby, SearchALL.ToUpper(),
            PrintAllPages.ToUpper());

        object[] users = new object[u.Count];
        for (int i = 0; i < u.Count; i++)
        {
            users[i] = new
            {
                Id = u[i].UserId,
                Title = u[i].Title,
                FirstName = u[i].FirstName,
                LastName = u[i].LastName,
                Privileges = (from apps in u[i].UserPrivileges
                              select new
                              {
                                  PrivilegeId = apps.Privilege.PrivilegeId,
                                  PrivilegeName = apps.Privilege.Name,
                                  DeactiveDate = apps.DeactiveDate
                              }),
                Status = (from status in u[i].UserStatus
                          select new
                          {
                              StatusId = status.Status.StatusId,
                              StatusName = status.Status.StatusName,
                              DeactiveDate = status.DeactiveDate
                          }),
                ActiveDate = u[i].ActiveDate,
                UserName = u[i].Email,
                UserCN = (from cert in u[i].UserCertificates
                          select new
                          {
                              CertificateNumber = cert.CertificateNumber,
                              DeactiveDate = cert.DeactiveDate
                          }),
                Company = u[i].Company.Name
            };
        }

        string x = "";
        string y = "";
        var report = users;
        foreach (var r in report)
        {
            x = r[0].....;
        }

    I want to assign the values from the report to something else inside the loop, but I am not able to read the data from the report object. Please help. Thank you.
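    Since the elements are boxed anonymous types, their members cannot be read back through object[] without reflection or dynamic. One common way around it, sketched below with a hypothetical UserSummary DTO (invented here, not part of the question's model), is to project into a named type so the values can be read with ordinary typed access:

        // Requires System.Linq; 'u' is the IList<User> from the question, property types are assumed.
        public class UserSummary
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Company { get; set; }
        }

        var report = u.Select(usr => new UserSummary
        {
            FirstName = usr.FirstName,
            LastName = usr.LastName,
            Company = usr.Company.Name
        }).ToList();

        foreach (var r in report)
        {
            string x = r.FirstName;   // typed access instead of r[0]
            string y = r.LastName;
        }

    Casting each element to dynamic is sometimes suggested as an alternative, with the caveat that anonymous types are internal to the assembly that declares them.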

  • database schema eligible for delta synchronization

    - by WilliamLou
    it's a question for discussion only. Right now, I need to re-design a mysql database table. Basically, this table contains all the contract records I synchronized from another database. The contract record can be modified, deleted or users can add new contract records via GUI interface. At this stage, the table structure is exactly the same as the Contract info (column: serial number, expiry date etc.). In that case, I can only synchronize the whole table (delete all old records, replace with new ones). If I want to delta(only synchronize with modified, new, deleted records) synchronize the table, how should I change the database schema? here is the method I come up with, but I need your suggestions because I think it's a common scenario in database applications. 1)introduce a sequence number concept/column: for each sequence, mark the new added records, modified records, deleted records with this sequence number. By recording the last synchronized sequence number, only pass those records with higher sequence number; 2) because deleted contracts can be added back, and the original table has primary key constraints, should I create another table for those deleted records? or add a flag column to indicate if this contract has been deleted? I hope I explain my question clearly. Anyway, if you know any articles or your own suggestions about this, please let me know. Thanks!
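    As a concrete (and purely illustrative) reading of point 1, the sketch below assumes a sync_seq column that every insert, update or soft-delete bumps, plus an is_deleted flag instead of physically removing rows; the consumer then only pulls rows above the last sequence it applied. The table and column names, the helper methods and the C#/MySql.Data framing are all assumptions, not a prescription.

        // 'connection' is assumed to be an open MySqlConnection.
        long lastSeen = LoadLastSyncedSequence();   // hypothetical helper, persisted on the consumer side

        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText =
                "SELECT serial_number, expiry_date, is_deleted, sync_seq " +
                "FROM contract WHERE sync_seq > @lastSeen ORDER BY sync_seq";
            cmd.Parameters.AddWithValue("@lastSeen", lastSeen);

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Upsert the row locally, or mark it deleted when is_deleted is set,
                    // and remember the highest sequence number applied.
                    lastSeen = reader.GetInt64(reader.GetOrdinal("sync_seq"));
                }
            }
        }
        SaveLastSyncedSequence(lastSeen);           // hypothetical helper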

  • Order in many to many relation in Django model

    - by Pietro Speroni
    I am writing a small website to store the papers I have written. The relation papers<- author is important, but the order of the name of the authors (which one is First Author, which one is second order, and so on) is also important. I am just learning Django so I don't know much. In any case so far I have done: from django.db import models class author(models.Model): Name = models.CharField(max_length=60) URLField = models.URLField(verify_exists=True, null=True, blank=True) def __unicode__(self): return self.Name class topic(models.Model): TopicName = models.CharField(max_length=60) def __unicode__(self): return self.TopicName class publication(models.Model): Title = models.CharField(max_length=100) Authors = models.ManyToManyField(author, null=True, blank=True) Content = models.TextField() Notes = models.TextField(blank=True) Abstract = models.TextField(blank=True) pub_date = models.DateField('date published') TimeInsertion = models.DateTimeField(auto_now=True) URLField = models.URLField(verify_exists=True,null=True, blank=True) Topic = models.ManyToManyField(topic, null=True, blank=True) def __unicode__(self): return self.Title This work fine in the sense that I now can define who the authors are. But I cannot order them. How should I do that? Of course I could add a series of relations: first author, second author,... but it would be ugly, and would not be flexible. Any better idea? Thanks

  • Can anyone simplify this Algorithm for me?

    - by Titan
    Basically I just want to check if one time period overlaps with another. Null end date means till infinity. Can anyone shorten this for me as its quite hard to read at times. Cheers public class TimePeriod { public DateTime StartDate { get; set; } public DateTime? EndDate { get; set; } public bool Overlaps(TimePeriod other) { // Means it overlaps if (other.StartDate == this.StartDate || other.EndDate == this.StartDate || other.StartDate == this.EndDate || other.EndDate == this.EndDate) return true; if(this.StartDate > other.StartDate) { // Negative if (this.EndDate.HasValue) { if (this.EndDate.Value < other.StartDate) return true; if (other.EndDate.HasValue && this.EndDate.Value < other.EndDate.Value) return true; } // Negative if (other.EndDate.HasValue) { if (other.EndDate.Value > this.StartDate) return true; if (this.EndDate.HasValue && other.EndDate.Value > this.EndDate.Value) return true; } else return true; } else if(this.StartDate < other.StartDate) { // Negative if (this.EndDate.HasValue) { if (this.EndDate.Value > other.StartDate) return true; if (other.EndDate.HasValue && this.EndDate.Value > other.EndDate.Value) return true; } else return true; // Negative if (other.EndDate.HasValue) { if (other.EndDate.Value < this.StartDate) return true; if (this.EndDate.HasValue && other.EndDate.Value < this.EndDate.Value) return true; } } return false; } }
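    For what it's worth, the usual simplification is that two ranges overlap exactly when each one starts on or before the point where the other ends. The sketch below assumes the intended semantics are inclusive bounds and that a null EndDate means the period never ends; it is worth checking against the edge cases the original was meant to cover.

        public bool Overlaps(TimePeriod other)
        {
            // A missing end date is treated as open-ended, so that side of the test always passes.
            bool thisStartsBeforeOtherEnds = !other.EndDate.HasValue || this.StartDate <= other.EndDate.Value;
            bool otherStartsBeforeThisEnds = !this.EndDate.HasValue || other.StartDate <= this.EndDate.Value;
            return thisStartsBeforeOtherEnds && otherStartsBeforeThisEnds;
        }

    Whether two periods that merely touch (one starting exactly when the other ends) should count as overlapping is a design choice; the original treats equal boundaries as an overlap, which is why the comparisons above are inclusive.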
