Search Results

Search found 24207 results on 969 pages for 'anonymous users'.

Page 678/969 | < Previous Page | 674 675 676 677 678 679 680 681 682 683 684 685  | Next Page >

  • JQGrid PDF Export

    - by thanigai
    Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspx

    JQGrid PDF Export. The aim of this article is to address PDF export from client-side grid frameworks. The solution is built with ASP.NET MVC 4 and Visual Studio 2012, and the article assumes the developer has a fair amount of knowledge of ASP.NET MVC and C#. Tools used: Visual Studio 2012, ASP.NET MVC 4, NuGet Package Manager. JQGrid is one of the client grid frameworks built on top of jQuery. It helps in building a grid with paging, sorting and editing options; other features are available as extension plugins, and developers can write their own if needed. You can download JQGrid from the JQGrid homepage or as a NuGet package. The command to download JQGrid through the Package Manager Console is given below (from the Tools menu select "Library Package Manager" and then "Package Manager Console"; see the screenshot). This command will pull down the latest JQGrid package and add it to the Scripts folder. Once the script is downloaded and referenced in the project, update the BundleConfig file (found in the App_Start folder) to add the style and script references for the pages:

        bundles.Add(new StyleBundle("~/Content/jqgrid").Include("~/Content/ui.jqgrid.css"));
        bundles.Add(new ScriptBundle("~/bundles/jquerygrid").Include("~/Scripts/jqGrid/jquery.jqGrid*"));

    Once the bundles are configured, reference them in Views/Shared/LayoutPage.cshtml. Add the following line to the head section of the page:

        @Styles.Render("~/Content/jqgrid")

    Add the following lines at the end of the page, before the closing html tag:

        @Scripts.Render("~/bundles/jquery")
        @Scripts.Render("~/bundles/jqueryui")
        @Scripts.Render("~/bundles/jquerygrid")

    That's all from the view perspective. Once these steps are done, the developer can start coding against JQGrid. In this example we modify the HomeController for the demo; the Index action is the default action, and we add a nullable bool argument to it just to mark a PDF request. In Index.cshtml we add a table tag with the id "gridTable", which will hold the grid. Since JQGrid is a jQuery extension, the grid is initialized in the scripts section of the page. This section is placed at the end of the page, just below the bundle references for jQuery and jQuery UI, to improve performance — one of the recommendations from Yahoo's YSlow.

        <table id="gridTable" class="scroll"></table>
        <input type="button" value="Export PDF" onclick="exportPDF();" />

        @section scripts {
        <script type="text/javascript">
            $(document).ready(function () {
                $("#gridTable").jqGrid({
                    datatype: "json",
                    url: '@Url.Action("GetCustomerDetails")',
                    mtype: 'GET',
                    colNames: ["CustomerID", "CustomerName", "Location", "PrimaryBusiness"],
                    colModel: [
                        { name: "CustomerID", width: 40, index: "CustomerID", align: "center" },
                        { name: "CustomerName", width: 40, index: "CustomerName", align: "center" },
                        { name: "Location", width: 40, index: "Location", align: "center" },
                        { name: "PrimaryBusiness", width: 40, index: "PrimaryBusiness", align: "center" }
                    ],
                    height: 250,
                    autowidth: true,
                    sortorder: "asc",
                    rowNum: 10,
                    rowList: [5, 10, 15, 20],
                    sortname: "CustomerID",
                    viewrecords: true
                });
            });

            function exportPDF() {
                document.location = '@Url.Action("Index")?pdf=true';
            }
        </script>
        }

    The exportPDF method just sets the document location to the Index action with pdf=true to mark the request as a PDF download. An in-memory list collection is used for demo purposes. GetCustomerDetails is the server-side action method that provides the data as a JSON list:

        [HttpGet]
        public JsonResult GetCustomerDetails()
        {
            var result = new
            {
                total = 1,
                page = 1,
                records = customerList.Count(),
                rows = customerList.Select(e => new
                {
                    id = e.CustomerID,
                    cell = new string[]
                    {
                        e.CustomerID.ToString(),
                        e.CustomerName,
                        e.Location,
                        e.PrimaryBusiness
                    }
                }).ToArray()
            };
            return Json(result, JsonRequestBehavior.AllowGet);
        }

    JQGrid expects the response data from the server in a specific format, and the server method above shapes the response so that JQGrid understands it properly: the total number of pages, the current page, the full record count, and the rows of data, each with an id and the remaining columns as a string array. The response is built using an anonymous object and sent as an MVC JsonResult. Since we are responding to an HTTP GET, the action is marked with the HttpGet attribute and the JsonRequestBehavior is set to AllowGet. The in-memory list is initialized in the HomeController constructor:

        public class HomeController : Controller
        {
            private readonly IList<CustomerViewModel> customerList;

            public HomeController()
            {
                customerList = new List<CustomerViewModel>
                {
                    new CustomerViewModel { CustomerID = 100, CustomerName = "Sundar", Location = "Chennai", PrimaryBusiness = "Teaching" },
                    new CustomerViewModel { CustomerID = 101, CustomerName = "Sudhagar", Location = "Chennai", PrimaryBusiness = "Software" },
                    new CustomerViewModel { CustomerID = 102, CustomerName = "Thivagar", Location = "China", PrimaryBusiness = "SAP" },
                };
            }

            public ActionResult Index(bool? pdf)
            {
                if (!pdf.HasValue)
                {
                    return View(customerList);
                }
                else
                {
                    string filePath = Server.MapPath("Content") + "Sample.pdf";
                    ExportPDF(customerList,
                              new string[] { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" },
                              filePath);
                    return File(filePath, "application/pdf", "list.pdf");
                }
            }

    The Index action method has a nullable Boolean argument named "pdf" that marks a PDF download request. When the application starts, this method handles the initial page request. For the PDF path, a file name is generated and passed to the ExportPDF method, which generates the PDF from the data source. The ExportPDF method is listed below:

            private static void ExportPDF<TSource>(IList<TSource> customerList, string[] columns, string filePath)
            {
                Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
                Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);
                Document document = new Document(PageSize.A4);
                PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
                document.Open();

                PdfPTable table = new PdfPTable(columns.Length);
                foreach (var column in columns)
                {
                    PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
                    cell.BackgroundColor = Color.BLACK;
                    table.AddCell(cell);
                }

                foreach (var item in customerList)
                {
                    foreach (var column in columns)
                    {
                        string value = item.GetType().GetProperty(column).GetValue(item).ToString();
                        PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
                        table.AddCell(cell5);
                    }
                }

                document.Add(table);
                document.Close();
            }

    iTextSharp is one of the pioneers in PDF export. It's an open-source library readily available as a NuGet package.
    This command will pull down the latest available library. I am using version 4.1.2.0; the latest version may have changed. There are three main classes to know in this library. Document: this class takes care of creating the document sheet with a particular size (we have used A4; there is also an option to define a rectangle size), and the document instance is passed to the other classes for reference. PdfWriter: PdfWriter takes the output file stream and the document as references, and enables the Document class to generate the PDF content and save it to a file. Font: using the Font class the developer can control the font features; since I want a nice looking font I am using Verdana. Following this, PdfPTable and PdfPCell are used for generating the normal table layout. We have created two fonts, one for the header cells and one for the row cells:

        Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
        Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);

    The header columns arrive as a string array. The columns array is looped over and the header row is generated using headerFont:

        PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
        document.Open();
        PdfPTable table = new PdfPTable(columns.Length);
        foreach (var column in columns)
        {
            PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
            cell.BackgroundColor = Color.BLACK;
            table.AddCell(cell);
        }

    Then reflection is used to read each column value and build the rows of the grid:

        foreach (var item in customerList)
        {
            foreach (var column in columns)
            {
                string value = item.GetType().GetProperty(column).GetValue(item).ToString();
                PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
                table.AddCell(cell5);
            }
        }
        document.Add(table);
        document.Close();

    Once the rows are done, the PDF table is added to the document and the document is closed, which writes all the changes to the given file path. Control then returns to the controller action, which sends the response as a file result with a file name. If the file name is not given, the PDF opens in the same page; otherwise a popup asks whether to save or open the file.

        return File(filePath, "application/pdf", "list.pdf");

    The final result screen is shown below, with the PDF file opened to show the output. Conclusion: this is how PDF export is done for JQGrid. The problem addressed here is that client-side grid frameworks don't support PDF export on their own; in that situation it's better to have fine-grained control over the data and the generated PDF, and iTextSharp has helped us achieve that goal.
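    The article refers to a CustomerViewModel class but never lists it. A minimal sketch of what it could look like, assuming plain auto-properties that match the four columns used above (the property names and types are taken from the grid and controller code; everything else is an assumption):

        // Hypothetical view model; the original article does not show this class.
        public class CustomerViewModel
        {
            public int CustomerID { get; set; }
            public string CustomerName { get; set; }
            public string Location { get; set; }
            public string PrimaryBusiness { get; set; }
        }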

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 8 byte alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
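    To make the layout concrete, here is a small C# sketch (not part of the original post) that walks the members of an SVR4 archive using only the rules described above: the 8-character magic string, a fixed 60-byte header per member with all numeric fields stored as ASCII text, and member data padded to an even length. The field offsets follow the ar_hdr definition in /usr/include/ar.h; error handling is omitted.

        using System;
        using System.IO;
        using System.Text;

        class ArWalker
        {
            // Prints the name and size of every member in an SVR4 archive.
            static void Main(string[] args)
            {
                using (var f = File.OpenRead(args[0]))
                {
                    var magic = new byte[8];
                    if (f.Read(magic, 0, 8) != 8 || Encoding.ASCII.GetString(magic) != "!<arch>\n")
                        throw new InvalidDataException("not an archive");

                    var hdr = new byte[60];                     // sizeof(struct ar_hdr)
                    while (f.Read(hdr, 0, 60) == 60)
                    {
                        // ar_name occupies bytes 0-15; special members start with '/'.
                        string name = Encoding.ASCII.GetString(hdr, 0, 16).TrimEnd();
                        // ar_size occupies bytes 48-57 and is ASCII decimal text, not binary.
                        long size = long.Parse(Encoding.ASCII.GetString(hdr, 48, 10).Trim());
                        Console.WriteLine($"{name}  {size} bytes");
                        // Member data is padded with a newline to an even length.
                        f.Seek(size + (size & 1), SeekOrigin.Current);
                    }
                }
            }
        }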
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
    As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants.
    They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
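    For illustration, here is a sketch of how a reader could resolve an overflowed header field against the archive string table. The first branch is the existing long-name convention described above; the second use of it would be the proposed "/ddd" size overflow, which is a plan rather than an implemented format. The assumption that string table entries end at a newline is mine, not something the post specifies.

        // Hypothetical helper: "field" is a raw ASCII header field (name, or size under the
        // proposed scheme); "stringTable" is the archive string table member read into memory.
        static string ResolveOverflow(string field, string stringTable)
        {
            field = field.Trim();
            if (!field.StartsWith("/") || field.Length < 2 || !char.IsDigit(field[1]))
                return field;                                   // value fits in the header itself

            int offset = int.Parse(field.Substring(1));         // "/ddd" -> offset into string table
            int end = stringTable.IndexOf('\n', offset);        // assumed terminator
            return stringTable.Substring(offset, (end < 0 ? stringTable.Length : end) - offset);
        }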

    Read the article

  • High-Performance In-Browser Networking

    - by Jon Purdy
    (Similar in spirit to but different in practice from this question.) Is there any cross-browser-compatible, in-browser technology that allows a high-performance persistent network connection between a server application and a client written in, say, Javascript? Think XmlHttpRequest on caffeine. I am working on a visualisation system that's restricted to at most a few users at once, and the server is pretty robust, so it can handle as much as it needs to. I would like to allow the client to have access to video streamed from the server at a minimum of about 20 frames per second, regardless of what their graphics hardware capabilities are. Simply put: is this doable without resorting to Flash or Java?

    Read the article

  • JQuery - Ajax saving sortables in connected lists when sortable item is moved

    - by Ben Sinclair
    I have multiple JQuery sortable lists that connect with each other... They allow you to assign users to certain roles. Basically, when a user is dragged from one list to another, I want JQuery to pick up the first list that the user was moved from so that I can send an AJAX request to delete it from that list in my database. I tried the following, but it fires every time you merely move the user over a list, even without dropping it in one, which means I'll be sending multiple AJAX requests... Does that make sense? $( ".selector" ).sortable({ out: function(event, ui) { ... } }); From my testing, I can use the following code to just update the list that the user has been moved to, so I've got the second half covered: $( ".selector" ).sortable({ receive: function(event, ui) { ... } }); Hopefully I am making sense :)

    Read the article

  • Loginview control asp.net mvc

    - by vikitor
    Hello all, I have been searching and haven't had any luck. I found a tutorial on using the LoginView control to display or hide parts of the views for different user roles in my application. The thing is that the tutorial I've found is for ASP.NET Web Forms, and I've been told by one of my colleagues that while it is the same framework underneath, the way to do this in ASP.NET MVC is different. Have you got any good tutorial to recommend? EDIT: I've got all my application set up, and the login and the roles already configured (via the ASP.NET membership provider). This is all already running. The thing is that if I have role a and role b, I want role a to be able to actually see the links to the actions it is authorized to work with, and not b, for example. If in the Index of my application I've got a link to "Edit" and only role a can access the action, then it should be displayed just for logged-in users that belong to role a, and not for those who belong to role b. Thank you, Vikitor
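    The Web Forms LoginView control is not normally used in MVC views; the common pattern (a sketch of the general idea, not from a specific tutorial) is to check the current user's role directly in the Razor view and only render the links that role is allowed to see, while also guarding the action itself. The action name "Edit" and the role names come from the question; Model.Id is a hypothetical route value.

        @* In Index.cshtml: render the Edit link only for role "a". *@
        @if (User.IsInRole("a"))
        {
            @Html.ActionLink("Edit", "Edit", new { id = Model.Id })
        }

        // On the controller action, so role "b" cannot reach it by typing the URL directly:
        [Authorize(Roles = "a")]
        public ActionResult Edit(int id)
        {
            // ... load and return the edit view
            return View();
        }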

    Read the article

  • Setting a cookie before Javascript Redirection

    - by Jason
    Hello, I have a Rails app where I set a session variable the moment a user lands on my site, with the referer and the page they hit. Additionally, I have Google Optimizer sending traffic from my homepage to various landing pages. The problem is that I think Google Optimizer is sending users away before the cookie is set. Is that even possible? I believe that the cookie is set from the HTTP header, which must have fully loaded before Google's Javascript has even loaded. Thanks, Jason

    Read the article

  • How to specify custom Sass directory with sinatra

    - by yaya3
    Instead of serving my Sass files from the default 'views' directory, I'd like to change this to /assets/sass. The following attempts are in the main Ruby file at the root of the app.

    Attempt 1:

        set :sass, Proc.new { File.join(root, "assets/sass") }
        get '/stylesheet.css' do
          sass :core
        end

    With this I get the following error: myapp.rb:17 NoMethodError: undefined method `merge' for "/Users/x/x/x/mysinatraapp/assets/sass":String

    Attempt 2:

        get '/stylesheet.css' do
          sass :'/assets/sass/core'
        end

    Attempt 3:

        get '/stylesheet.css' do
          sass :'/assets/sass/core'
        end

    Both return the following error: Errno::ENOENT: No such file or directory - ./views/assets/sass/core.sass

    Attempt 4:

        get '/stylesheet.css' do
          sass :'../assets/sass/core'
        end

    This works! However, there must be something along the lines of set :sass, Proc.new { File.join(root, "assets/sass") } that sets this up for me?

    Read the article

  • How To: Eclipse compile error with Android ADT

    - by Sahat
    This error happens when you try to build & run .xml files (e.g. main.xml or strings.xml) instead of .java files.

    Problem:

        [2010-05-28 06:42:42] Error in an XML file: aborting build.
        [2010-05-28 06:42:42] res/layout/main.xml:0: error: Resource entry main is already defined.
        [2010-05-28 06:42:42] res/layout/main.out.xml:0: Originally defined here.
        [2010-05-28 06:42:42] /Users/sakhat/Code/Sudoku/res/layout/main.out.xml:1: error: Error parsing XML: no element found

    Solution: Delete main.out.xml. If you still can't run, then follow this: Eclipse - Project - Clean... - Choose your project - OK

    Read the article

  • How to control allowed HTML tags in WMD Editor?

    - by Toto
    I am trying to somehow set the valid HTML tags and attributes users would be able to use in the WMD Editor on my site. For example, I want to forbid the user from directly setting the font size, color, typeface and so on, which is trivial to do with the default settings by typing something like: <span style="font-size: 45px; color:#FF0000">Some intrusive text here</span>. I think the way to implement this is through the "wmd_options", but I have not found any documentation or reference regarding this, given that the 'Options demo' seems to be the only public documentation and it does not show how to do what I have described above. I've sent this same question to [email protected] but didn't get any reply. As Stack Overflow uses this editor, someone reading this (or maybe Jeff) knows the answer ;) Thanks in advance!

    Read the article

  • How do you redeploy javascript in Idea when using a Tomcat configuration

    - by Jonny Leeds
    I'm working on a java/javascript webapp that runs on Tomcat. We're working with IDEA and I've managed to get debugging set up for both the client and server code at the same time, which is great. I did have hot redeployment of the javascript set up when running Tomcat manually; however, I find that when running Tomcat through IDEA this doesn't work, as it's setting stuff up somewhere in my users folder. I was going to just set up a deployment configuration to go to that folder, but I can't see any of the javascript files in there. Is it possible to get the best of both worlds and have debugging and automatic deployment working together?

    Read the article

  • Call up last exception on an ASP.NET error page.

    - by Aren B
    I've got an error page, SiteError.aspx, and it's configured correctly in the web.config to go there when unhandled exceptions are encountered. I want to use this page to log the exception that triggered it as well, because I only want to LOG the errors that users actually encounter (i.e. if SiteError.aspx is ever hit). This is the code I have in the OnLoad(...) of SiteError.aspx:

        Exception lastEx = Context.Server.GetLastError();
        if (lastEx != null)
            log.Error("A site error was encountered", lastEx);

    However, my log entry never shows up in my Output, and if I breakpoint on line 2 (in this example) code execution is never interrupted (after letting the exception pass through to ASP.NET handling in the debugger).
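    For reference, if the exception has already been handled by the time the custom error page loads, one place where Server.GetLastError() still returns it is Application_Error in Global.asax, which runs before ASP.NET redirects to the custom error page. A minimal sketch, assuming the same "log" object used above:

        // Global.asax.cs -- sketch; logs the unhandled exception before the customErrors redirect fires.
        protected void Application_Error(object sender, EventArgs e)
        {
            Exception lastEx = Server.GetLastError();
            if (lastEx != null)
                log.Error("A site error was encountered", lastEx);
            // Note: calling Server.ClearError() here would suppress the redirect to SiteError.aspx.
        }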

    Read the article

  • Update facebook status using pyfacebook offline access

    - by Alon Carmel
    Hey, I'm trying to update a user's status from a Django Python app. The user went through Facebook Connect and registered with the app, so I have the session key and the Facebook user id (fbuid).

        fb = Facebook(FACEBOOK_API_KEY, FACEBOOK_SECRET_KEY)
        if fbsessionkey:
            fb.session_key = fbsessionkey
            fb.uid = fbuid
            fb.auth.createToken()
            fb.auth.getSession()
            # update the facebook status
            fb.users.setStatus(status="testing", clear=False)
        else:
            pass

    What am I doing wrong? I'm getting: Error 104: Incorrect signature. Please note the user has already granted offline access as well. Please help...

    Read the article

  • what possible workarounds are there for "only parameterless constructors are support in Linq to Enti

    - by Ralph Shillington
    In my query I need to return instances of a class that doesn't have a default constructor (specifically this is in a custom Membership provider, and MembershipUser is the culprit). The query

        var users = from l in context.Logins
                    select new MembershipUser(
                        Name,
                        l.Username,            // username
                        l.Id,                  // provider key
                        l.MailTo,
                        l.PasswordQuestion,
                        l.Notes.FirstOrDefault().NoteText,
                        l.IsApproved,
                        l.IsLockedOut,
                        l.CreatedOn,
                        l.LastLoginOn.HasValue ? l.LastLoginOn.Value : DateTime.MinValue,
                        l.LastActivityOn.HasValue ? l.LastActivityOn.Value : DateTime.MinValue,
                        DateTime.MinValue,
                        l.LastLockedOutOn.HasValue ? l.LastLockedOutOn.Value : DateTime.MinValue);

    is syntactically correct, but results in a runtime error: "Only parameterless constructors and initializers are supported in LINQ to Entities."
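    One common workaround (a sketch, not the only option) is to project the needed columns into an anonymous type on the database side, switch to LINQ to Objects with AsEnumerable(), and only then call the parameterized MembershipUser constructor in memory. "Name" below stands for whatever provider name the original query passed as its first argument.

        var users = context.Logins
            .Select(l => new
            {
                l.Username,
                l.Id,
                l.MailTo,
                l.PasswordQuestion,
                Note = l.Notes.FirstOrDefault().NoteText,
                l.IsApproved,
                l.IsLockedOut,
                l.CreatedOn,
                l.LastLoginOn,
                l.LastActivityOn,
                l.LastLockedOutOn
            })
            .AsEnumerable()   // from here on the query runs in memory, so any constructor is allowed
            .Select(x => new MembershipUser(
                Name, x.Username, x.Id, x.MailTo, x.PasswordQuestion, x.Note,
                x.IsApproved, x.IsLockedOut, x.CreatedOn,
                x.LastLoginOn ?? DateTime.MinValue,
                x.LastActivityOn ?? DateTime.MinValue,
                DateTime.MinValue,
                x.LastLockedOutOn ?? DateTime.MinValue));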

    Read the article

  • How do you redirect https to http

    - by mauriciopastrana
    That is, the opposite of what (seemingly) everyone teaches. I have a server on https for which I paid for an SSL cert, and a mirror for which I haven't, which I keep around just for emergencies, so it doesn't merit getting a cert. On my clients' desktops I have SOME shortcuts which point to http://production_server and https://productionserver (both work); however, I know that if my prod server goes down, then DNS forwarding kicks in and those clients which have https in their shortcut will be staring at https://mirrorserver (which doesn't work) and a big fat IE7 red screen of uneasiness for my company. Unfortunately, I can't just switch this around at the client level. These users are very computer illiterate and are very likely to freak out from seeing https "insecurity" errors (especially the way FFX3 and IE7 handle it nowadays: FULL STOP, kinda thankfully, but not helping me here LOL). It's very easy to find Apache solutions for http-to-https redirection, but for the life of me I can't do the opposite. Ideas? Cheers, /mp

    Read the article

  • when is a push notification old?

    - by hookjd
    I have noted that when the iPhone OS receives a push notification, it treats the user clicking the action button as a "response" to the push notification for some indefinite period of time. If the user lets the push notification sit on screen for a number of seconds, or lets the phone go to sleep, the phone no longer considers the user's action a response to the push notification itself, and therefore does not launch the corresponding app. So my question is... does anyone know precisely how long the iPhone OS considers a tap to be a response corresponding to the push? Sorry, I can't find a great way to phrase this question, but I hope it makes sense. From my testing I'm guessing it's something like 20 seconds, but I don't see this specifically documented anywhere.

    Read the article

  • Setting Sql server security rights for multiple situations

    - by DanDan
    We have an application which uses a local instance of SQL Server for its backend storage. The administrator Windows login has had its sysadmin right revoked, and instead two SQL logins have been created: one for the application with a secret password, and one read-only login we let users view the raw data with. This was working fine until we moved to FILESTREAM, which requires integrated Windows authentication, so now the SQL Server logins must be replaced. As a result, I am now reviewing all of our logins, but I am not sure how this is possible. It seems that the application needs full read/write access, yet I still need to lock down writing to the tables so the user cannot log in to the database and delete data randomly. Does anyone have any tips for setting multiple levels of security using integrated Windows logins, or can you direct me to any further reading? Some answers can also be found on Server Fault: http://serverfault.com/questions/138763/setting-sql-server-security-rights-for-multiple-situations

    Read the article

  • Facebook API friends_get is extremely slow

    - by IkimashoZ
    I have a PHP application running in iFrame mode. I am rendering an <fb:multi-friend-selector condensed="true"> inside of <fb:serverfbml> tags. This is inside a PHP file that calls a function that gets a list of user IDs using $facebook->api_client->friends_get();. The multi-friend selector renders just fine, but when I leave the friends_get() call uncommented, the page takes between 15 and 20 seconds to load (confirmed with Firebug)! The goal is to limit the number of users displayed in the selector by building a list of user ids not to display, for use in the friend selector's exclude_ids parameter. And since it's "exclude_ids" and not "include_ids", I can't think of a way of getting around this API call. It seems to me there must be something I can do to make the API call faster, because I've seen friend selectors that load much more quickly.

    Read the article

  • C# WMI Eventwatcher code stopped working on Windows 7 with security exception

    - by Flores
    This is code that worked fine on Windows XP for years. The user is not a local administrator.

        WqlEventQuery query = new WqlEventQuery("SELECT * FROM Win32_ProcessStopTrace");
        ConnectionOptions co = new ConnectionOptions();
        co.EnablePrivileges = true;
        ManagementEventWatcher watcher = new ManagementEventWatcher(new ManagementScope(@"root\cimv2", co), query);
        watcher.EventArrived += StopEventArrived;
        watcher.Start();

    This throws a SecurityException (Access Denied) on Windows 7 when running as a non-admin. On XP this works fine without being admin. On this link MS states that 'Windows 7: Low-integrity users have read-only permissions for local WMI operations.' I guess this is the problem, but I can't find any clue on how to change this.

    Read the article

  • RIA Services: custom authorization

    - by Budda
    Here is a good example of how to create custom authorization for RIA Services: http://stackoverflow.com/questions/1195326/ria-services-how-can-i-create-custom-authentication In my case the Silverlight pages will be displayed as part of HTML content, and user authorization is already implemented on the server side (ASP.NET Membership is not used). The Silverlight pages need to show different information for authorized and non-authorized users. Is there any way to track on the Silverlight side whether the user is already authorized on the server side (on the usual ASP.NET web site)? Please advise how to do this. Thank you in advance.

    Read the article

  • App engine datastore - query on Enum fields.

    - by Gopi
    I am using GAE (Java) with JDO for persistence. I have an entity with an Enum field which is marked as @Persistent and gets saved correctly into the datastore (as observed from the Datastore viewer in the Development Console). But when I query these entities with a filter based on the Enum value, it always returns all the entities, whatever value I specify for the enum field. I know GAE Java supports enums being persisted just like basic datatypes, but does it also allow retrieving/querying based on them? A Google search could not point me to any such example code. Details: I have printed the Query just before it is executed. In the two cases the query looks like:

        SELECT FROM com.xxx.yyy.User WHERE role == super ORDER BY key desc RANGE 0,50
        SELECT FROM com.xxx.yyy.User WHERE role == admin ORDER BY key desc RANGE 0,50

    Both queries return all the User entities from the datastore, in spite of the Datastore viewer showing that some Users are of type 'admin' and some are of type 'super'.

    Read the article

  • LINQ query checks for null

    - by user300992
    I have a userList, and some users don't have a name (null). If I run the first LINQ query, I get an "object reference not set to an instance of an object" error.

        var temp = (from a in userList where ((a.name == "john") && (a.name != null)) select a).ToList();

    However, if I switch the order by putting the null check first, then it works without throwing any error:

        var temp = (from a in userList where ((a.name != null) && (a.name == "john")) select a).ToList();

    Why is that? If this were pure C# code (not LINQ), I think both would be the same. I don't have SQL Profiler; I am just curious what the difference will be when they are translated at the SQL level.

    Read the article

  • [C#] Namespace Organization and Conventions

    - by Bob Dylan
    So I have a little bit of a problem. I'm working on a project in C# using the StackOverflow API. You can send it a request like so: http://stackoverflow.com/users/rep/126196/2010-01-01/2010-03-13 and get back something like this JSON response:

        [{"PostUrl":"1167342", "PostTitle":"Are ref and out in C# the same a pointers in C++?", "Rep":10},
         {"PostUrl":"1290595", "PostTitle":"Where can I find a good tutorial on bubbling?", "Rep":10} ...

    My problem is that I have some methods like GetJsonResponse(), which returns the above JSON, and SaveTempFile(), which saves that JSON response to a temporary file for later use. I'm not sure if I should create a class for them, or what namespace to put them under. Right now my namespace hierarchy is like so: StackOverflow.Api.Json. So how should I organize these methods/classes/namespaces?

    Read the article

  • Finding duplicate values in a SQL table

    - by Alex
    It's easy to find duplicates with one field:

        SELECT name, COUNT(email)
        FROM users
        GROUP BY email
        HAVING ( COUNT(email) > 1 )

    So if we have a table

        ID   NAME   EMAIL
        1    John   [email protected]
        2    Sam    [email protected]
        3    Tom    [email protected]
        4    Bob    [email protected]
        5    Tom    [email protected]

    this query will give us John, Sam, Tom, Tom, because they all have the same e-mails. But what I want is to get duplicates with the same e-mails and names, i.e. to get Tom, Tom. I made a mistake and allowed duplicate name and e-mail values to be inserted. Now I need to remove/change the duplicates, but I need to find them first.
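    For reference, grouping on both columns (a sketch against the table above, in the same style as the query in the question) returns only the rows whose name and e-mail pair occurs more than once:

        SELECT name, email, COUNT(*)
        FROM users
        GROUP BY name, email
        HAVING ( COUNT(*) > 1 )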

    Read the article

  • Bounce Email handling with PHP??

    - by mcfadder_09
    I am really new to this (not new to PHP). Here is my scenario: I have 2 email accounts, [email protected] and [email protected]. I want to send email to all my users from [email protected] but with the "reply to" set to [email protected] (up to here, my PHP script can handle it). When an email can't be sent, a bounce goes to [email protected]; the error message could be 553 (non-existent email ...), etc. My question is: how do I direct all those bounce emails (the undeliverable ones) to [email protected] through a handling script that checks for the bounce error codes? What programming language should I use for the "handling script"? What would the "handling script" look like? Can you give a sample? OR (big question): what are the procedures I should follow to handle the bounce emails?

    Read the article

  • "possible loss of precision" is Java going crazy or I'm missing something?

    - by Lo'oris
    I'm getting a "loss of precision" error when there should be none, AFAIK. this is an instance variable: byte move=0; this happens in a method of this class: this.move=(this.move<<4)|(byte)(Guy.moven.indexOf("left")&0xF); move is a byte, move is still a byte, and the rest is being cast to a byte. I get this error: [javac] /Users/looris/Sviluppo/dumdedum/client/src/net/looris/android/toutry/Guy.java:245: possible loss of precision [javac] found : int [javac] required: byte [javac] this.move=(this.move<<4)|(byte)(Guy.moven.indexOf("left")&0xF); [javac] ^ I've tried many variations but I still get the same error. I'm now clueless.

    Read the article
