Search Results

Search found 1714 results on 69 pages for 'utf8 decode'.


  • Encoding regarding Term::Size

    - by sid_com
    Hello! The Term::Size module jumbles up the encoding. How can I fix this?

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.010;
        use utf8;
        binmode STDOUT, ':encoding(UTF-8)';
        use Term::Size;

        my $string = 'Hällö';
        say $string;
        my $columns = ( Term::Size::chars *STDOUT{IO} )[0];
        say $columns;
        say $string;

    Output:

        Hällö
        140
        H?ll?

    Read the article

  • Why isn't this message subject encoded properly? (php mail)

    - by Camran
    I use this code to send an email:

        $headers = "MIME-Version: 1.0"."\n";
        $headers .= "Content-type: text/plain; charset=UTF-8"."\n";
        $headers .= "From: $name <$email>"."\n";
        mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]');

    If I use the special characters Å, Ä and Ö from the Swedish alphabet, they are not encoded properly, so ö turns up as ö. However, this doesn't happen if I change the $to variable to a Gmail address; then they are shown correctly. Anybody got any idea? Thanks.

    UPDATE: When I echo $name, the name is displayed correctly, in UTF-8, with all special characters nicely shown.
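    The subject above already uses the MIME encoded-word form (=?UTF-8?B?...?=), so the encoding step itself looks right. For comparison, here is the same technique done with the standard JavaMail helper - a minimal sketch, assuming the javax.mail library is on the classpath; the subject text is just an example:

        import javax.mail.internet.MimeUtility;

        public class EncodedWordDemo {
            public static void main(String[] args) throws Exception {
                // "B" selects Base64 encoding; the result is an RFC 2047
                // encoded-word of the form =?UTF-8?B?...?=
                String subject = MimeUtility.encodeText("Åäö från Sverige", "UTF-8", "B");
                System.out.println(subject);
            }
        }

    If the encoded-word is well formed, differences between recipients usually come down to how strictly each mail client parses long or folded headers, which would fit the Gmail-works/other-client-fails symptom.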

    Read the article

  • Is it possible to use a String as a seed for an instance of Random? [duplicate]

    - by danpetruk
    This question already has an answer here: Converting String to long in Java (5 answers)

    I have a String value and I would like to initialize an instance of Random using the string as a seed. Is it possible to do this, and if so, how?

    UPDATE 1: The string does not consist only of ASCII; it can contain any UTF-8 symbols.

    UPDATE 2: Why did you close it? The questions really aren't the same: firstly because I have a UTF-8 string, and secondly because different strings can produce the same seed.
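    One approach - a sketch, not the only way - is to hash the string's UTF-8 bytes down to 64 bits and pass that to Random's long constructor; hashing the bytes also addresses the "different strings, same seed" concern better than the 32-bit String.hashCode():

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.Random;

        public class StringSeededRandom {
            // Derive a 64-bit seed from an arbitrary UTF-8 string by hashing
            // its bytes and folding the first 8 digest bytes into a long.
            static long seedFrom(String s) throws Exception {
                byte[] digest = MessageDigest.getInstance("SHA-256")
                        .digest(s.getBytes(StandardCharsets.UTF_8));
                long seed = 0;
                for (int i = 0; i < 8; i++) {
                    seed = (seed << 8) | (digest[i] & 0xffL);
                }
                return seed;
            }

            public static void main(String[] args) throws Exception {
                // Any UTF-8 text works; the same string always yields the same sequence.
                Random r = new Random(seedFrom("Hällö wörld"));
                System.out.println(r.nextInt(100));
            }
        }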

    Read the article

  • MySQLdb Python escaping: ? or %s?

    - by asldkncvas
    Dear Everyone, I am currently using MySQLdb. What is the correct way to escape strings in MySQLdb arguments? Note that

        E = lambda x: x.encode('utf-8')

    and my connection is set with charset='utf8'. These are the results I am getting for these arguments:

        w1, w2 = u'??', u'??'

    1) self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2)))

    This fails with:

        ret = self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2)) )
        File "build/bdist.linux-i686/egg/MySQLdb/cursors.py", line 158, in execute
        TypeError: not all arguments converted during string formatting

    2) self.cur.execute("SELECT dist FROM distance WHERE w1=%s AND w2=%s", (E(w1), E(w2)))

    This works fine, but when w1 or w2 has \ inside, the escaping obviously fails. I personally know that %s is not a good method to pass in arguments due to injection attacks etc.
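    Note that the placeholder style is a property of the database adapter, not of SQL itself: MySQLdb uses %s as its placeholder (paramstyle 'format'), and values passed as a separate argument are escaped by the driver. For comparison, the equivalent parameterized query through Java's JDBC uses ? - a sketch with made-up connection details:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class ParamQueryDemo {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection URL and credentials, for illustration only.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost/testdb?useUnicode=true&characterEncoding=UTF-8",
                        "user", "password");
                     PreparedStatement ps = conn.prepareStatement(
                        "SELECT dist FROM distance WHERE w1 = ? AND w2 = ?")) {
                    // The driver quotes/escapes the bound values, backslashes included,
                    // so no manual escaping is needed.
                    ps.setString(1, "wörd1");
                    ps.setString(2, "wörd2");
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getInt("dist"));
                        }
                    }
                }
            }
        }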

    Read the article

  • Android app does not show up on my device or the emulator in Eclipse

    - by Sam
    Hey everyone, I have no errors in my app code whatsoever, but when I try to run it on either my cell phone or the emulator/AVD in Eclipse, it doesn't show up on either one. This is my console output:

        [2011-02-04 08:14:58 - Versuch] Uploading Versuch.apk onto device 'CB511L2WTB'
        [2011-02-04 08:14:58 - Versuch] Installing Versuch.apk...
        [2011-02-04 08:15:01 - Versuch] Success!
        [2011-02-04 08:15:01 - Versuch] \Versuch\bin\Versuch.apk installed on device
        [2011-02-04 08:15:01 - Versuch] Done!

    And this is my LogCat output, which tells me nothing, but you are the experts ;) 02-04 08:18:10.020: DEBUG/dalvikvm(22167): GC freed 2576 objects / 559120 bytes in 37ms 02-04 08:18:10.700: DEBUG/dalvikvm(6709): GC freed 7692 objects / 478912 bytes in 41ms 02-04 08:18:11.170: DEBUG/dalvikvm(31774): GC freed 3367 objects / 163464 bytes in 122ms 02-04 08:18:13.230: DEBUG/dalvikvm(22167): GC freed 2790 objects / 552328 bytes in 38ms 02-04 08:18:14.650: DEBUG/dalvikvm(6709): GC freed 8443 objects / 540440 bytes in 39ms 02-04 08:18:16.260: DEBUG/dalvikvm(31921): GC freed 214 objects / 9824 bytes in 216ms 02-04 08:18:16.670: DEBUG/dalvikvm(22167): GC freed 3232 objects / 561256 bytes in 40ms 02-04 08:18:18.600: DEBUG/dalvikvm(6709): GC freed 7718 objects / 481952 bytes in 39ms 02-04 08:18:19.210: DEBUG/dalvikvm(1129): GC freed 6898 objects / 275328 bytes in 109ms 02-04 08:18:19.690: DEBUG/dalvikvm(22167): GC freed 2968 objects / 571232 bytes in 39ms 02-04 08:18:21.440: DEBUG/dalvikvm(1212): GC freed 1020 objects / 49328 bytes in 395ms 02-04 08:18:22.570: DEBUG/dalvikvm(6709): GC freed 7893 objects / 495616 bytes in 40ms 02-04 08:18:23.060: DEBUG/dalvikvm(22167): GC freed 3117 objects / 561912 bytes in 41ms 02-04 08:18:25.860: DEBUG/dalvikvm(22167): GC freed 2924 objects / 558448 bytes in 36ms 02-04 08:18:26.350: DEBUG/dalvikvm(32098): GC freed 4662 objects / 495496 bytes in 290ms 02-04 08:18:26.410: DEBUG/dalvikvm(22167): GC freed 1077 objects / 130680 bytes in 33ms 02-04 08:18:27.080: DEBUG/dalvikvm(6709): GC freed 7912 objects / 485368 bytes in 40ms 02-04 08:18:28.190: DEBUG/dalvikvm(22167): GC freed 953 objects / 767272 bytes in 33ms 02-04 08:18:29.500: DEBUG/dalvikvm(1129): GC freed 6756 objects / 270480 bytes in 105ms 02-04 08:18:30.500: WARN/System.err(22536): java.lang.Exception: You must call com.mercuryintermedia.productconfiguration.initialize() first 02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.ProductConfiguration.getProductName(ProductConfiguration.java:136) 02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.api.rest.Item.getPublishingContainersItems(Item.java:15) 02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.getContainerFromServer(ContainerHelper.java:68) 02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.run(ContainerHelper.java:46) 02-04 08:18:31.090: DEBUG/dalvikvm(6709): GC freed 10545 objects / 682480 bytes in 49ms 02-04 08:18:31.120: DEBUG/dalvikvm(1813): GC freed 5970 objects / 310912 bytes in 60ms 02-04 08:18:31.320: DEBUG/dalvikvm(22167): GC freed 2468 objects / 539520 bytes in 39ms 02-04 08:18:34.110: DEBUG/dalvikvm(22167): GC freed 2879 objects / 569008 bytes in 35ms 02-04 08:18:34.920: DEBUG/dalvikvm(6709): GC freed 7029 objects / 424632 bytes in 35ms 02-04 08:18:36.150: DEBUG/dalvikvm(9060): GC freed 564 objects / 27840 bytes in 89ms 02-04 08:18:36.630: DEBUG/dalvikvm(22167): GC freed 2437 objects / 554000 bytes in 35ms 02-04 
08:18:38.760: DEBUG/dalvikvm(6709): GC freed 8309 objects / 545032 bytes in 36ms 02-04 08:18:39.270: DEBUG/dalvikvm(1129): GC freed 6958 objects / 278352 bytes in 107ms 02-04 08:18:39.970: DEBUG/dalvikvm(22167): GC freed 2915 objects / 560312 bytes in 38ms 02-04 08:18:41.260: DEBUG/dalvikvm(6184): GC freed 373 objects / 26152 bytes in 205ms 02-04 08:18:42.780: DEBUG/dalvikvm(6709): GC freed 7212 objects / 447696 bytes in 36ms 02-04 08:18:43.160: DEBUG/dalvikvm(22167): GC freed 3106 objects / 561824 bytes in 39ms 02-04 08:18:46.310: DEBUG/dalvikvm(22167): GC freed 3110 objects / 564080 bytes in 45ms 02-04 08:18:46.650: DEBUG/dalvikvm(6709): GC freed 7508 objects / 468832 bytes in 36ms 02-04 08:18:48.820: DEBUG/dalvikvm(31712): GC freed 13795 objects / 828232 bytes in 203ms 02-04 08:18:49.040: DEBUG/dalvikvm(1129): GC freed 6918 objects / 276224 bytes in 109ms 02-04 08:18:49.640: DEBUG/dalvikvm(22167): GC freed 2952 objects / 562168 bytes in 37ms 02-04 08:18:50.630: DEBUG/dalvikvm(6709): GC freed 8332 objects / 549680 bytes in 35ms 02-04 08:18:52.770: DEBUG/dalvikvm(22167): GC freed 3108 objects / 563192 bytes in 37ms 02-04 08:18:54.400: DEBUG/dalvikvm(6709): GC freed 7509 objects / 469016 bytes in 35ms 02-04 08:18:55.900: DEBUG/dalvikvm(22167): GC freed 3121 objects / 572920 bytes in 38ms 02-04 08:18:58.150: DEBUG/dalvikvm(6709): GC freed 7408 objects / 465456 bytes in 35ms 02-04 08:18:58.710: DEBUG/dalvikvm(1129): GC freed 6908 objects / 276440 bytes in 107ms 02-04 08:18:59.190: DEBUG/dalvikvm(22167): GC freed 3160 objects / 563144 bytes in 38ms 02-04 08:19:02.080: DEBUG/dalvikvm(6709): GC freed 7436 objects / 468040 bytes in 36ms 02-04 08:19:02.380: DEBUG/dalvikvm(22167): GC freed 3104 objects / 557600 bytes in 39ms 02-04 08:19:05.050: DEBUG/dalvikvm(22167): GC freed 2860 objects / 570072 bytes in 35ms 02-04 08:19:05.810: DEBUG/dalvikvm(6709): GC freed 7508 objects / 469080 bytes in 35ms 02-04 08:19:06.500: DEBUG/skia(22167): --- decoder->decode returned false 02-04 08:19:07.960: DEBUG/dalvikvm(22167): GC freed 2747 objects / 520008 bytes in 36ms 02-04 08:19:08.180: DEBUG/dalvikvm(1129): GC freed 7866 objects / 317304 bytes in 107ms 02-04 08:19:09.540: DEBUG/dalvikvm(6709): GC freed 8220 objects / 539688 bytes in 36ms 02-04 08:19:10.810: DEBUG/dalvikvm(22167): GC freed 2898 objects / 596824 bytes in 37ms 02-04 08:19:13.360: DEBUG/dalvikvm(22167): GC freed 2503 objects / 398936 bytes in 35ms 02-04 08:19:13.370: INFO/dalvikvm-heap(22167): Grow heap (frag case) to 5.029MB for 570264-byte allocation 02-04 08:19:13.400: DEBUG/dalvikvm(22167): GC freed 702 objects / 24976 bytes in 31ms 02-04 08:19:13.400: DEBUG/skia(22167): --- decoder->decode returned false 02-04 08:19:13.540: DEBUG/dalvikvm(6709): GC freed 7481 objects / 466544 bytes in 36ms 02-04 08:19:15.600: DEBUG/WifiService(1129): got ACTION_DEVICE_IDLE 02-04 08:19:15.960: INFO/wpa_supplicant(2522): CTRL-EVENT-DRIVER-STATE STOPPED 02-04 08:19:15.960: VERBOSE/WifiMonitor(1129): Event [CTRL-EVENT-DRIVER-STATE STOPPED] 02-04 08:19:17.270: DEBUG/dalvikvm(22167): GC freed 2372 objects / 1266992 bytes in 36ms 02-04 08:19:17.520: DEBUG/dalvikvm(6709): GC freed 7996 objects / 519128 bytes in 37ms 02-04 08:19:18.150: DEBUG/dalvikvm(1129): GC freed 7110 objects / 285032 bytes in 108ms 02-04 08:19:20.460: DEBUG/dalvikvm(22167): GC freed 3327 objects / 565264 bytes in 36ms 02-04 08:19:21.250: DEBUG/dalvikvm(6709): GC freed 7632 objects / 486024 bytes in 37ms 02-04 08:19:26.470: DEBUG/dalvikvm(31774): GC freed 345 objects / 16160 bytes in 96ms 02-04 
08:19:30.423: WARN/System.err(22536): java.lang.Exception: You must call com.mercuryintermedia.productconfiguration.initialize() first 02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.ProductConfiguration.getProductName(ProductConfiguration.java:136) 02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.api.rest.Item.getPublishingContainersItems(Item.java:15) 02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.getContainerFromServer(ContainerHelper.java:68) 02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.run(ContainerHelper.java:46) 02-04 08:20:05.280: DEBUG/dalvikvm(1813): GC freed 741 objects / 36840 bytes in 91ms 02-04 08:20:23.580: DEBUG/WifiService(1129): ACTION_BATTERY_CHANGED pluggedType: 2 02-04 08:20:30.423: WARN/System.err(22536): java.lang.Exception: You must call com.mercuryintermedia.productconfiguration.initialize() first 02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.ProductConfiguration.getProductName(ProductConfiguration.java:136) 02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.api.rest.Item.getPublishingContainersItems(Item.java:15) 02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.getContainerFromServer(ContainerHelper.java:68) 02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.run(ContainerHelper.java:46) 02-04 08:20:53.970: INFO/FastDormancyManager(1129): Fast Dormant executed. ExecuteCount:2683 NonExecuteCount:25773 I really hope you can help me.

    Read the article

  • Out of memory exception during scrolling of listview

    - by user1761316
    I am using Facebook data like posted picture, profile picture, name and message in my ListView. I am getting an OOM error while doing fast scrolling of the ListView. I also have a scroll listener in my application that loads more data when the scrollbar reaches the bottom of the screen. I just want to know whether I need to change anything in this class.

        imageLoader.DisplayImage(postimage.get(position).replace(" ", "%20"), postimg);

    I am using the above line to call the method in this ImageLoader class to set the bitmap to the ImageView. Here is my ImageLoader class:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileNotFoundException;
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.Collections;
        import java.util.Map;
        import java.util.Stack;
        import java.util.WeakHashMap;

        import com.stellent.beerbro.Wall;

        import android.app.Activity;
        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.Bitmap.Config;
        import android.graphics.BitmapFactory;
        import android.graphics.Canvas;
        import android.graphics.Paint;
        import android.graphics.PorterDuff.Mode;
        import android.graphics.PorterDuffXfermode;
        import android.graphics.Rect;
        import android.graphics.RectF;
        import android.graphics.drawable.BitmapDrawable;
        import android.util.Log;
        import android.widget.ImageView;

        public class ImageLoader {

            MemoryCache memoryCache = new MemoryCache();
            FileCache fileCache;
            private Map<ImageView, String> imageViews =
                    Collections.synchronizedMap(new WeakHashMap<ImageView, String>());

            public ImageLoader(Context context) {
                // Make the background thread low priority so it does not affect UI performance.
                photoLoaderThread.setPriority(Thread.NORM_PRIORITY - 1);
                fileCache = new FileCache(context);
            }

            // final int stub_id = R.drawable.stub;

            public void DisplayImage(String url, ImageView imageView) {
                imageViews.put(imageView, url);
                System.gc();
                // Bitmap bitmap = createScaledBitmap(memoryCache.get(url), 100, 100, 0);
                Bitmap bitmap = memoryCache.get(url);
                // Bitmap bitmaps = bitmap.createScaledBitmap(bitmap, 0, 100, 100);
                if (bitmap != null) {
                    imageView.setBackgroundDrawable(new BitmapDrawable(bitmap));
                    // imageView.setImageBitmap(getRoundedCornerBitmap(bitmap, 10, 70, 70));
                    // imageView.setImageBitmap(bitmap);
                    // Log.v("first", "first");
                } else {
                    queuePhoto(url, imageView);
                    // Log.v("second", "second");
                }
            }

            private Bitmap createScaledBitmap(Bitmap bitmap, int i, int j, int k) {
                // TODO Auto-generated method stub
                return null;
            }

            private void queuePhoto(String url, ImageView imageView) {
                // This ImageView may have been used for other images before, so there may
                // be some old tasks in the queue. We need to discard them.
                photosQueue.Clean(imageView);
                PhotoToLoad p = new PhotoToLoad(url, imageView);
                synchronized (photosQueue.photosToLoad) {
                    photosQueue.photosToLoad.push(p);
                    photosQueue.photosToLoad.notifyAll();
                }
                // Start the thread if it's not started yet.
                if (photoLoaderThread.getState() == Thread.State.NEW)
                    photoLoaderThread.start();
            }

            public Bitmap getBitmap(String url) {
                File f = fileCache.getFile(url);

                // From SD cache.
                Bitmap b = decodeFile(f);
                if (b != null)
                    return b;

                // From web.
                try {
                    Bitmap bitmap = null;
                    URL imageUrl = new URL(url);
                    HttpURLConnection conn = (HttpURLConnection) imageUrl.openConnection();
                    conn.setConnectTimeout(30000);
                    conn.setReadTimeout(30000);
                    InputStream is = conn.getInputStream();
                    OutputStream os = new FileOutputStream(f);
                    Utils.CopyStream(is, os);
                    os.close();
                    bitmap = decodeFile(f);
                    return bitmap;
                } catch (Exception ex) {
                    ex.printStackTrace();
                    return null;
                }
            } //Lalit

            // Decodes the image and scales it to reduce memory consumption.
            private Bitmap decodeFile(File f) {
                try {
                    // Decode image size only.
                    BitmapFactory.Options o = new BitmapFactory.Options();
                    o.inJustDecodeBounds = true;
                    BitmapFactory.decodeStream(new FileInputStream(f), null, o);

                    // Find the correct scale value. It should be a power of 2.
                    final int REQUIRED_SIZE = Wall.width;
                    final int REQUIRED_SIZE1 = Wall.height;
                    // final int REQUIRED_SIZE = 250;
                    int width_tmp = o.outWidth, height_tmp = o.outHeight;
                    int scale = 1;
                    // while (o.outWidth / scale / 2 >= REQUIRED_SIZE && o.outHeight / scale / 2 >= REQUIRED_SIZE)
                    //     scale *= 2;
                    while (true) {
                        if (width_tmp / 2 < REQUIRED_SIZE && height_tmp / 2 < REQUIRED_SIZE1)
                            break;
                        width_tmp /= 2;
                        height_tmp /= 2;
                        scale *= 2;
                    }

                    // Decode with inSampleSize.
                    BitmapFactory.Options o2 = new BitmapFactory.Options();
                    o2.inSampleSize = scale;
                    // o2.inSampleSize = 2;
                    return BitmapFactory.decodeStream(new FileInputStream(f), null, o2);
                } catch (FileNotFoundException e) {}
                return null;
            }

            // Task for the queue.
            private class PhotoToLoad {
                public String url;
                public ImageView imageView;

                public PhotoToLoad(String u, ImageView i) {
                    url = u;
                    imageView = i;
                }
            }

            PhotosQueue photosQueue = new PhotosQueue();

            public void stopThread() {
                photoLoaderThread.interrupt();
            }

            // Stores the list of photos to download.
            class PhotosQueue {
                private Stack<PhotoToLoad> photosToLoad = new Stack<PhotoToLoad>();

                // Removes all instances of this ImageView.
                public void Clean(ImageView image) {
                    for (int j = 0; j < photosToLoad.size();) {
                        if (photosToLoad.get(j).imageView == image)
                            photosToLoad.remove(j);
                        else
                            ++j;
                    }
                }
            }

            class PhotosLoader extends Thread {
                public void run() {
                    try {
                        while (true) {
                            // The thread waits until there are images to load in the queue.
                            if (photosQueue.photosToLoad.size() == 0)
                                synchronized (photosQueue.photosToLoad) {
                                    photosQueue.photosToLoad.wait();
                                }
                            if (photosQueue.photosToLoad.size() != 0) {
                                PhotoToLoad photoToLoad;
                                synchronized (photosQueue.photosToLoad) {
                                    photoToLoad = photosQueue.photosToLoad.pop();
                                }
                                Bitmap bmp = getBitmap(photoToLoad.url);
                                memoryCache.put(photoToLoad.url, bmp);
                                String tag = imageViews.get(photoToLoad.imageView);
                                if (tag != null && tag.equals(photoToLoad.url)) {
                                    BitmapDisplayer bd = new BitmapDisplayer(bmp, photoToLoad.imageView);
                                    Activity a = (Activity) photoToLoad.imageView.getContext();
                                    a.runOnUiThread(bd);
                                }
                            }
                            if (Thread.interrupted())
                                break;
                        }
                    } catch (InterruptedException e) {
                        // Allow the thread to exit.
                    }
                }
            }

            PhotosLoader photoLoaderThread = new PhotosLoader();

            // Used to display the bitmap in the UI thread.
            class BitmapDisplayer implements Runnable {
                Bitmap bitmap;
                ImageView imageView;

                public BitmapDisplayer(Bitmap b, ImageView i) { bitmap = b; imageView = i; }

                public void run() {
                    if (bitmap != null)
                        imageView.setBackgroundDrawable(new BitmapDrawable(bitmap));
                }
            }

            public void clearCache() {
                memoryCache.clear();
                fileCache.clear();
            }

            public static Bitmap getRoundedCornerBitmap(Bitmap bitmap, int pixels, int width, int height) {
                Bitmap output = Bitmap.createBitmap(width, height, Config.ARGB_8888);
                Canvas canvas = new Canvas(output);

                final int color = 0xff424242;
                final Paint paint = new Paint();
                final Rect rect = new Rect(0, 0, bitmap.getWidth(), bitmap.getHeight());
                final RectF rectF = new RectF(rect);
                final float roundPx = pixels;

                paint.setAntiAlias(true);
                canvas.drawARGB(0, 0, 0, 0);
                paint.setColor(color);
                canvas.drawRoundRect(rectF, roundPx, roundPx, paint);
                paint.setXfermode(new PorterDuffXfermode(Mode.SRC_IN));
                canvas.drawBitmap(bitmap, rect, rect, paint);

                return output;
            }
        }
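    One common source of OOM with this pattern is the unbounded in-memory cache: every decoded bitmap stays reachable until the heap fills. A bounded, least-recently-used cache is the usual remedy. This is a sketch, assuming android.util.LruCache (available since API 12) can stand in for whatever MemoryCache does above:

        import android.graphics.Bitmap;
        import android.util.LruCache;

        public class BoundedBitmapCache {
            private final LruCache<String, Bitmap> cache;

            public BoundedBitmapCache() {
                // Use a fraction of the VM's max memory as the cache budget (in KB).
                int maxKb = (int) (Runtime.getRuntime().maxMemory() / 1024 / 8);
                cache = new LruCache<String, Bitmap>(maxKb) {
                    @Override
                    protected int sizeOf(String key, Bitmap bitmap) {
                        // Measure entries in kilobytes so they count against maxKb.
                        return bitmap.getRowBytes() * bitmap.getHeight() / 1024;
                    }
                };
            }

            public Bitmap get(String url) {
                return cache.get(url);
            }

            public void put(String url, Bitmap bitmap) {
                // Oldest entries are evicted automatically once maxKb is exceeded.
                if (bitmap != null && get(url) == null) cache.put(url, bitmap);
            }
        }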

    Read the article

  • Why is the download popup window in the browser not showing up when using JAX-RS vs. a standard servlet?

    - by masato-san
    When I try using standard servlet approach, in my browser the popup window shows up asking me whether to open .xls file or save it. I tried the exactly same code via JAX-RS and the browser popup won't show up somehow. Has anyone encounter this mystery? Standard servlet that works: package local.test.servlet; import java.io.IOException; import java.net.URL; import java.net.URLDecoder; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import local.test.jaxrs.ExcellaTestResource; import org.apache.poi.ss.usermodel.Workbook; import org.bbreak.excella.core.BookData; import org.bbreak.excella.core.exception.ExportException; import org.bbreak.excella.reports.exporter.ExcelExporter; import org.bbreak.excella.reports.exporter.ReportBookExporter; import org.bbreak.excella.reports.model.ConvertConfiguration; import org.bbreak.excella.reports.model.ReportBook; import org.bbreak.excella.reports.model.ReportSheet; import org.bbreak.excella.reports.processor.ReportProcessor; @WebServlet(name="ExcelServlet", urlPatterns={"/ExcelServlet"}) public class ExcelServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { try { //C:\Users\m-takayashiki\.netbeans\6.9\config\GF3\domain1 // ================== ?????? ======================= URL templateFileUrl = ExcellaTestResource.class.getResource("?????????.xls"); // /C:/Users/m-takayashiki/Documents/NetBeansProjects/KogaAlpha/build/web/WEB-INF/classes/local/test/jaxrs/?????????.xls System.out.println(templateFileUrl.getPath()); String templateFilePath = URLDecoder.decode(templateFileUrl.getPath(), "UTF-8"); String outputFileDir = "MasatoExcelHorizontalOutput"; ReportProcessor reportProcessor = new ReportProcessor(); ReportBook outputBook = new ReportBook(templateFilePath, outputFileDir, ExcelExporter.FORMAT_TYPE); ReportSheet outputSheet = new ReportSheet("??????"); outputBook.addReportSheet(outputSheet); // ======================================================== // --------------- ?????? 
------------------------- reportProcessor.addReportBookExporter(new OutputStreamExporter(response)); System.out.println("wtf???"); reportProcessor.process(outputBook); System.out.println("done!!"); } catch(Exception e) { System.out.println(e); } } //end doGet() @Override protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { } }//end class class OutputStreamExporter extends ReportBookExporter { private HttpServletResponse response; public OutputStreamExporter(HttpServletResponse response) { this.response = response; } @Override public String getExtention() { return null; } @Override public String getFormatType() { return ExcelExporter.FORMAT_TYPE; } @Override public void output(Workbook book, BookData bookdata, ConvertConfiguration configuration) throws ExportException { System.out.println(book.getFirstVisibleTab()); System.out.println(book.getSheetName(0)); //TODO write to stream try { response.setContentType("application/vnd.ms-excel"); response.setHeader("Content-Disposition", "attachment; filename=masatoExample.xls"); book.write(response.getOutputStream()); response.getOutputStream().close(); System.out.println("booya!!"); } catch(Exception e) { System.out.println(e); } } }//end class JAX-RS way that won't display popup: package local.test.jaxrs; import java.net.URL; import java.net.URLDecoder; import javax.servlet.ServletOutputStream; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import javax.ws.rs.core.Context; import javax.ws.rs.core.UriInfo; import javax.ws.rs.Path; import javax.ws.rs.GET; import javax.ws.rs.Produces; import org.apache.poi.ss.usermodel.Workbook; import org.bbreak.excella.core.BookData; import org.bbreak.excella.core.exception.ExportException; import org.bbreak.excella.reports.exporter.ExcelExporter; import org.bbreak.excella.reports.exporter.ReportBookExporter; import org.bbreak.excella.reports.model.ConvertConfiguration; import org.bbreak.excella.reports.model.ReportBook; import org.bbreak.excella.reports.model.ReportSheet; import org.bbreak.excella.reports.processor.ReportProcessor; @Path("excellaTest") public class ExcellaTestResource { @Context private UriInfo context; @Context private HttpServletResponse response; @Context private HttpServletRequest request; public ExcellaTestResource() { } @Path("horizontalProcess") @GET //@Produces("application/vnd.ms-excel") @Produces("application/vnd.ms-excel") public void getProcessHorizontally() { try { //C:\Users\m-takayashiki\.netbeans\6.9\config\GF3\domain1 // ================== ?????? ======================= URL templateFileUrl = this.getClass().getResource("?????????.xls"); // /C:/Users/m-takayashiki/Documents/NetBeansProjects/KogaAlpha/build/web/WEB-INF/classes/local/test/jaxrs/?????????.xls System.out.println(templateFileUrl.getPath()); String templateFilePath = URLDecoder.decode(templateFileUrl.getPath(), "UTF-8"); String outputFileDir = "MasatoExcelHorizontalOutput"; ReportProcessor reportProcessor = new ReportProcessor(); ReportBook outputBook = new ReportBook(templateFilePath, outputFileDir, ExcelExporter.FORMAT_TYPE); ReportSheet outputSheet = new ReportSheet("??????"); outputBook.addReportSheet(outputSheet); // ======================================================== // --------------- ?????? 
------------------------- reportProcessor.addReportBookExporter(new OutputStreamExporter(response)); System.out.println("wtf???"); reportProcessor.process(outputBook); System.out.println("done!!"); } catch(Exception e) { System.out.println(e); } //return response; return; } }//end class class OutputStreamExporter extends ReportBookExporter { private HttpServletResponse response; public OutputStreamExporter(HttpServletResponse response) { this.response = response; } @Override public String getExtention() { return null; } @Override public String getFormatType() { return ExcelExporter.FORMAT_TYPE; } @Override public void output(Workbook book, BookData bookdata, ConvertConfiguration configuration) throws ExportException { //TODO write to stream try { response.setContentType("application/vnd.ms-excel"); response.setHeader("Content-Disposition", "attachment; filename=masatoExample.xls"); book.write(response.getOutputStream()); response.getOutputStream().close(); System.out.println("booya!!"); } catch(Exception e) { System.out.println(e); } } }//end class
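    One way to get the browser's save dialog from JAX-RS is to let the framework build the response rather than writing to the injected HttpServletResponse (which JAX-RS may commit or discard around your @Produces handling): return a Response carrying the Content-Disposition header with a StreamingOutput body. A minimal sketch under that assumption - the ExCella report generation from the question is elided and replaced with placeholder bytes:

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.Response;
        import javax.ws.rs.core.StreamingOutput;

        @Path("excellaTest")
        public class ExcelDownloadResource {

            @GET
            @Path("horizontalProcess")
            @Produces("application/vnd.ms-excel")
            public Response getProcessHorizontally() {
                StreamingOutput body = output -> {
                    // In the question this would be workbook.write(output); placeholder
                    // bytes are streamed here so the sketch stays self-contained.
                    output.write("dummy xls content".getBytes("UTF-8"));
                };
                // The Content-Disposition header is what triggers the save dialog.
                return Response.ok(body)
                        .header("Content-Disposition", "attachment; filename=masatoExample.xls")
                        .build();
            }
        }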

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the inbetween step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012  removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line  looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this:   It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of varience. You might also want to do an IISRESET to clear the Web Server. 
    Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test with:

    • 100,000 iterations
    • Load factor of 20 concurrent connections
    • IISReset before starting
    • A short warm-up run for API and MVC to make sure startup cost is mitigated

    Here is the batch file I used for the test:

        IISRESET
        REM make sure you add
        REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin
        REM to your path so ab.exe can be found

        REM Warm up
        ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson
        ab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson
        ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld

        ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt
        ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt
        ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt

    I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests but the values used here are the medians. Part of this has to do with the fact that I ran the tests on my local machine - results would probably be more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop, which is a Dell XPS with a quad core Sandy Bridge i7-2720QM @ 2.20GHz and a fast SSD drive on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly, IIS 7 and 8 on client OSs don't throttle, so the performance here should be comparable.

    Results

    Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms, and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start, the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performance of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client, which indicates full round-trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on them, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling.
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served. Another test - JSON Data Service Results The second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since that doesn't make a ton of sense to return data from the them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this: public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } } Here are the updated handler classes that use Person: Handler public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } } This code adds a little logic to check for a action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response. WCF REST   [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } } For WCF REST all I have to do is add a method with the Person result type.   ASP.NET MVC public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } } For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer. ASP.NET WebAPI public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. 
Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands:REM JSON RESPONSES ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:   The performance for each technology drops a little bit except for WebAPI which is up quite a bit! From this test it appears that WebAPI is actually significantly better performing returning a JSON response, rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows performance improved significantly from with JSON results from 5580 to 6530 or so which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached and runiisning the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests where returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example: these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length for the number which results in 68 and 69 bytes respectively. The same URL produces different result lengths which is what ab.exe reports. I didn't notice at first bit the same is happening when running the ASHX handler with JSON.NET result since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabalize somewhere around 5900 req/sec occasionally jumping lower. 
    For these tests this is why I did the IISRESET and warm-up for individual tests. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - and I'm curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

    Summary

    As I stated at the outset, these tests were informal, to satiate my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the highest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance but also about the adequacy for the job and the ease of implementation. The strengths of each technology will make up for any minor performance difference we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code? This stresses another point: don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

    Resources

    • Source Code on GitHub
    • Apache HTTP Server Project (ab.exe is part of the binary distribution)

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web API.

    Read the article

  • GParted Partition Mount Points Alternate Between 2 Physical Disk Drives

    - by California Ken
    I'm running Ubuntu Server 14.04 on a system with 2 physical disk drives. I am frequently seeing mount errors on startup. When I check the drive partitions using GParted, I see that my two "non-system created" data partitions have the wrong disk assignments (i.e. sda1 vs sdb1, or vice versa). If I hand-edit /etc/fstab to match GParted, the system will boot error-free one time. On the second restart I will get the "serious mount problem" error for the 2 data partitions, and when I check GParted, the disk assignments have changed again (again, GParted and fstab don't match). A listing of my /etc/fstab is:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a device; this may
        # be used with UUID= as a more robust way to name devices that works even if
        # disks are added and removed. See fstab(5).
        #
        # / was on /dev/sdb2 during installation
        UUID=766a06a4-e5af-484a-adf0-fa1e88da7212  /  ext4  errors=remount-ro,user_xattr,acl,barrier=1  0  1
        # swap was on /dev/sda6 during installation
        UUID=8c42f835-ead3-43fb-88d8-196f5dfc3aa7  none  swap  sw  0  0
        # swap was on /dev/sdb3 during installation
        UUID=2214deec-ba98-47da-aea7-4e46998f3e57  none  swap  sw  0  0
        /dev/fd0   /media/floppy0         auto  rw,user,noauto,exec,utf8  0  0
        /dev/sda1  /media/ken/Linux-Data  ext3  defaults  0  2
        /dev/sda5  /media/ken/Data2       ext4  defaults  0  2

    The device designations in the last 2 lines are the ones in question. The fstab entries do NOT change between system restarts, but the mount points in the GParted display do. Does anyone have a fix for this?

    Thanks Mr. Young and Mr. Gedak. Following is my fstab file and two blkid outputs. The fstab output is correct. The first blkid output was after a reboot and is WRONG! The sda and sdb device partition data is reversed. The 2nd blkid output was after a second reboot (fstab not changed); it shows the sda and sdb partition data CORRECTLY. I didn't see any duplicate UUIDs. Does anyone have any idea why the GParted and blkid outputs alternate on consecutive reboots? The alternating partition data is real, since when the partition assignments are reversed the boot sequence halts with disk mounting errors (I have to press "s" to skip the mounts). Thanks again. Ken

    I copied the contents of a text file showing my fstab and 2 blkid outputs. The text file contents show up in the text entry box but do not appear in the main body of the question. Is there a way I can attach the text file or edit this question so that the text is displayed for question viewers?
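    Since the sda/sdb names are assigned at boot and can swap between disks, the usual fix is to mount the two data partitions by UUID as well, exactly as the installer did for / and swap. This is a sketch, not a confirmed answer for this system; the UUIDs below are placeholders for whatever blkid reports for those two partitions:

        # hypothetical UUIDs - substitute the values blkid prints for each data partition
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/ken/Linux-Data  ext3  defaults  0  2
        UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy  /media/ken/Data2       ext4  defaults  0  2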

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    • No assembly reference means only dynamic access - no compiler type checking or Intellisense
    • The host application must have a reference to JSON.NET or else you get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:

    • Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    • Load types into dynamic variables
    • Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.

    Read the article

  • Excel Solver vs Solver Foundation

    - by JoshReuben
    I recently read a book http://www.amazon.com/Scientific-Engineering-Cookbook-Cookbooks-OReilly/dp/0596008791/ref=sr_1_1?ie=UTF8&s=books&qid=1296593374&sr=8-1 - the Excel Scientific and Engineering Cookbook. The 2 main tools that this book leveraged were the Data Analysis Pack and Excel Solver. I had previously been acquainted with Microsoft Solver Foundation - a full-fledged API for solving optimization problems that goes beyond being a mere Excel plugin: it exposes a C# programmatic interface for in-process integration and a web service interface for out-of-process integration. Were they the same? Apparently not! There are 2 different solver frameworks for Excel: http://www.solver.com/index.html and http://www.solverfoundation.com/ I contacted both vendors to get their perspectives.

    Here's what the Excel Solver guys had to say: "The Solver Foundation requires you to learn and use a very specific modeling language (OML). The Excel Solver allows you to formulate your optimization problems without learning any new language, simply by entering the formulas into cells on the Excel spreadsheet - something that nearly everyone is already familiar with doing. The Excel Solver also allows you to seamlessly upgrade to products that combine Monte Carlo Simulation capabilities (our Risk Solver Premium and Risk Solver Platform products), which allow you to include uncertainty in your models when appropriate. Our advanced Excel Solver products also have a number of built-in reporting tools for advanced analysis of your model and its results."

    And here's what the Microsoft Solver Foundation guys had to say: "With the release of Solver Foundation 3.0, Solver Foundation has the same kinds of solvers (plus a few more) as what is found in Excel Solver. I think there are two main differences: 1. Problems are described differently. In Excel Solver the goals and constraints are specified inside the spreadsheet, in formulas. In Solver Foundation they are described either in .NET code that uses the Solver Foundation Services API, or using the OML modeling language in Excel. 2. Solver Foundation's primary strength is in solving large linear, mixed integer, and constraint models. That is, models that contain arbitrary nonlinear functions (such as trig functions, IF(), powers, etc.) are handled a bit better by the Excel Solver at this point."
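    To make the comparison concrete, here is a minimal sketch of the Solver Foundation Services API that the second vendor refers to - a small linear maximization model built in C#. The decision names, coefficients, and bounds are illustrative assumptions, not anything taken from either vendor:

    using System;
    using Microsoft.SolverFoundation.Services;

    class SolverFoundationSketch
    {
        static void Main()
        {
            // Get the solver context and create an empty model
            SolverContext context = SolverContext.GetContext();
            Model model = context.CreateModel();

            // Two non-negative real decisions (illustrative names)
            Decision x = new Decision(Domain.RealNonnegative, "x");
            Decision y = new Decision(Domain.RealNonnegative, "y");
            model.AddDecisions(x, y);

            // An illustrative resource constraint and a profit goal
            model.AddConstraint("capacity", x + 2 * y <= 14);
            model.AddGoal("profit", GoalKind.Maximize, 3 * x + 4 * y);

            // Solve and read back the decision values
            Solution solution = context.Solve();
            Console.WriteLine("x = {0}, y = {1}", x.ToDouble(), y.ToDouble());
        }
    }

    By contrast, the Excel Solver expresses the same kind of model as cell formulas and Solver dialog settings rather than code.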

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy-to-use, code-first approach to configuration: You create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library
    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading
    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:
    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - The host application needs a reference to JSON.NET or else you get runtime errors
    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:
    - Dynamically create an instance and optionally attempt to load the Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums
    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically it doesn't deal with object activation, truly dynamic (string-based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation
    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed.

    Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

    /// <summary>
    /// Creates an instance of a type based on a string. Assumes that the type's
    /// assembly is already loaded.
    /// </summary>
    /// <param name="typeName">Common name of the type</param>
    /// <param name="args">Any constructor parameters</param>
    /// <returns></returns>
    public static object CreateInstanceFromString(string typeName, params object[] args)
    {
        object instance = null;
        Type type = null;

        try
        {
            type = GetTypeFromName(typeName);
            if (type == null)
                return null;

            instance = Activator.CreateInstance(type, args);
        }
        catch
        {
            return null;
        }

        return instance;
    }

    /// <summary>
    /// Helper routine that looks up a type name and tries to retrieve the
    /// full type reference in the actively executing assemblies.
    /// </summary>
    /// <param name="typeName"></param>
    /// <returns></returns>
    public static Type GetTypeFromName(string typeName)
    {
        Type type = null;

        // Let default name binding find it
        type = Type.GetType(typeName, false);
        if (type != null)
            return type;

        // look through the loaded assembly list
        var assemblies = AppDomain.CurrentDomain.GetAssemblies();

        // try to find the type manually
        foreach (Assembly asm in assemblies)
        {
            type = asm.GetType(typeName, false);
            if (type != null)
                break;
        }
        return type;
    }

    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
    Here's what the factory function looks like in JsonSerializationUtils:

    /// <summary>
    /// Dynamically creates an instance of JSON.NET
    /// </summary>
    /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
    /// <returns>Dynamic JsonSerializer instance</returns>
    public static dynamic CreateJsonNet(bool throwExceptions = true)
    {
        if (JsonNet != null)
            return JsonNet;

        lock (SyncLock)
        {
            if (JsonNet != null)
                return JsonNet;

            // Try to create an instance
            dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
            if (json == null)
            {
                try
                {
                    // Load the assembly explicitly and retry
                    var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                    json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                }
                catch (Exception ex)
                {
                    if (throwExceptions)
                        throw;
                    return null;
                }
            }
            if (json == null)
                return null;

            json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

            // Enums as strings in JSON
            dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
            json.Converters.Add(enumConverter);

            JsonNet = json;
        }
        return JsonNet;
    }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and, when it fails, tries to load the assembly and then re-tries the creation. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions.

    Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the underlying numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.

    Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.

    As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion
    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks: two each for serialization and deserialization, for strings and for files. Here's what the file serialization looks like:

    /// <summary>
    /// Serializes an object instance to a JSON file.
    /// </summary>
    /// <param name="value">the value to serialize</param>
    /// <param name="fileName">Full path to the file to write out with JSON.</param>
    /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
    /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
    /// <returns>true or false</returns>
    public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
    {
        dynamic writer = null;
        FileStream fs = null;
        try
        {
            Type type = value.GetType();
            var json = CreateJsonNet(throwExceptions);
            if (json == null)
                return false;

            fs = new FileStream(fileName, FileMode.Create);
            var sw = new StreamWriter(fs, Encoding.UTF8);

            writer = Activator.CreateInstance(JsonTextWriterType, sw);

            if (formatJsonOutput)
                writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

            writer.QuoteChar = '"';
            json.Serialize(writer, value);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
            if (throwExceptions)
                throw;
            return false;
        }
        finally
        {
            if (writer != null)
                writer.Close();
            if (fs != null)
                fs.Close();
        }
        return true;
    }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.

    For full circle operation here's the DeserializeFromFile() version:

    /// <summary>
    /// Deserializes an object from file and returns a reference.
    /// </summary>
    /// <param name="fileName">name of the file to deserialize from</param>
    /// <param name="objectType">The Type of the object. Use typeof(YourObjectClass)</param>
    /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
    /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
    public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
    {
        dynamic json = CreateJsonNet(throwExceptions);
        if (json == null)
            return null;

        object result = null;
        dynamic reader = null;
        FileStream fs = null;
        try
        {
            fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
            var sr = new StreamReader(fs, Encoding.UTF8);

            reader = Activator.CreateInstance(JsonTextReaderType, sr);
            result = json.Deserialize(reader, objectType);
            reader.Close();
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
            if (throwExceptions)
                throw;
            return null;
        }
        finally
        {
            if (reader != null)
                reader.Close();
            if (fs != null)
                fs.Close();
        }
        return result;
    }

    This code is a little more compact since there are no prettifying options to set.
    Here the JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations; the string operations are very similar - the code is fairly repetitive (a sketch of a string-based version appears in the footnote at the end of this post). These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper
    The final consumer of the JsonSerializationUtils wrapper is an actual ConfigurationProvider that is responsible for reading and writing JSON values to and from files. The provider is simply a small wrapper around the JsonSerializationUtils component, and there's very little code needed to make this work now. The whole provider looks like this:

    /// <summary>
    /// Reads and writes configuration settings as JSON files.
    /// Allows reading and writing to default or external files
    /// and specification of the configuration section that settings are
    /// applied to.
    /// </summary>
    public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
        where TAppConfiguration : AppConfiguration, new()
    {
        /// <summary>
        /// Optional - the configuration file where configuration settings are
        /// stored. If not specified uses the default Configuration Manager
        /// and its default store.
        /// </summary>
        public string JsonConfigurationFile
        {
            get { return _JsonConfigurationFile; }
            set { _JsonConfigurationFile = value; }
        }
        private string _JsonConfigurationFile = string.Empty;

        public override bool Read(AppConfiguration config)
        {
            var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
            if (newConfig == null)
            {
                if (Write(config))
                    return true;
                return false;
            }

            DecryptFields(newConfig);
            DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
            return true;
        }

        /// <summary>
        /// Reads the configuration from the JSON file and returns a new instance.
        /// </summary>
        /// <typeparam name="TAppConfig"></typeparam>
        /// <returns></returns>
        public override TAppConfig Read<TAppConfig>()
        {
            var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
            if (result != null)
                DecryptFields(result);
            return result;
        }

        /// <summary>
        /// Write configuration to the JsonConfigurationFile location
        /// </summary>
        /// <param name="config"></param>
        /// <returns></returns>
        public override bool Write(AppConfiguration config)
        {
            EncryptFields(config);
            bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

            // Have to decrypt again to make sure the properties are readable afterwards
            DecryptFields(config);
            return result;
        }
    }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component - simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies; all of that is abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the JsonSerializationUtils class.

    Already there are several other places in other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

    Performance?
    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In some informal testing it looks like the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change depending on the size of the objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling', as I do here, the overhead is small enough not to matter.

    All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?
    Dynamic loading is not something you need very often, but on occasion it makes sense. There's a price to be paid in added code complexity and a performance hit that depends on how frequently the dynamic code is accessed. But for operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial: it avoids having to ship extra files, adding dependencies and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET  C#
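    A footnote on the string-based operations mentioned above: they follow the same pattern as the file versions. Here is a minimal sketch of what a serialize-to-string counterpart might look like - it assumes the same cached JsonTextWriterType and FormattingType Type fields used by the file version, and it is an illustration rather than the exact code from the library:

    /// <summary>
    /// Serializes an object into a JSON string (illustrative sketch).
    /// </summary>
    public static string Serialize(object value, bool throwExceptions = false, bool formatJsonOutput = false)
    {
        dynamic json = CreateJsonNet(throwExceptions);
        if (json == null)
            return null;

        try
        {
            var sb = new StringBuilder();
            using (var sw = new StringWriter(sb))
            {
                // JsonTextWriterType is assumed to be a cached Type reference
                // to Newtonsoft.Json.JsonTextWriter, as in the file version
                dynamic writer = Activator.CreateInstance(JsonTextWriterType, sw);
                if (formatJsonOutput)
                    writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");
                writer.QuoteChar = '"';
                json.Serialize(writer, value);
                writer.Close();
            }
            return sb.ToString();
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
            if (throwExceptions)
                throw;
            return null;
        }
    }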

    Read the article

  • Docky have stopped working since update

    - by Fraekkert
    I'm running Ubuntu 10.10 64-bit. I'm using the Docky ppa, and since the latest update it won't start. If I run it from the terminal, this is what I get:

    [Info 09:21:19.005] Docky version: 2.1.0 bzr docky r1761 ppa
    [Info 09:21:19.024] Kernel version: 2.6.35.24
    [Info 09:21:19.026] CLR version: 2.0.50727.1433
    [Debug 09:21:19.493] [UserArgs] BufferTime = 0
    [Debug 09:21:19.494] [UserArgs] MaxSize = 2147483647
    [Debug 09:21:19.494] [UserArgs] NetbookMode = False
    [Debug 09:21:19.494] [UserArgs] NoPollCursor = False
    [Debug 09:21:19.528] [SystemService] Using org.freedesktop.UPower for battery information
    [Info 09:21:19.564] [ThemeService] Setting theme: Transparent
    [Debug 09:21:19.587] [DesktopItemService] Loading remap file '/usr/share/docky/remaps.ini'.
    [Debug 09:21:19.599] [DesktopItemService] Remapping 'Picasa3.exe' to 'picasa'.
    [Debug 09:21:19.599] [DesktopItemService] Remapping 'nbexec' to 'netbeans'.
    [Debug 09:21:19.599] [DesktopItemService] Remapping 'deja-dup-preferences' to 'deja-dup'.
    [Debug 09:21:19.599] [DesktopItemService] Remapping 'VirtualBox' to 'virtualbox'.
    [Warn 09:21:19.600] [DesktopItemService] Could not find remap file '/home/lasse/.local/share/docky/remaps.ini'!
    [Debug 09:21:19.602] [DesktopItemService] Loading desktop item cache '/home/lasse/.cache/docky/docky.desktop.en_DK.utf8.cache'.
    [Info 09:21:20.101] [DockServices] Dock services initialized.
    [Debug 09:21:20.134] [DBusManager] DBus Registered: org.gnome.Docky
    [Debug 09:21:20.142] [DBusManager] DBus Registered: net.launchpad.DockManager
    Stacktrace:
    at (wrapper managed-to-native) System.IO.MonoIO.Read (intptr,byte[],int,int,System.IO.MonoIOError&) <IL 0x00012, 0x00062>
    at (wrapper managed-to-native) System.IO.MonoIO.Read (intptr,byte[],int,int,System.IO.MonoIOError&) <IL 0x00012, 0x00062>
    at System.IO.FileStream.ReadData (intptr,byte[],int,int) <IL 0x00009, 0x00047>
    at System.IO.FileStream.RefillBuffer () <IL 0x0001c, 0x0002b>
    at System.IO.FileStream.ReadByte () <IL 0x00079, 0x000c7>
    at Mono.Addins.Serialization.BinaryXmlReader.ReadNext () <IL 0x0000b, 0x00031>
    at Mono.Addins.Serialization.BinaryXmlReader.Skip () <IL 0x0003c, 0x00053>
    at Mono.Addins.Serialization.BinaryXmlReader.Skip () <IL 0x00047, 0x0005f>
    at Mono.Addins.Serialization.BinaryXmlReader.Skip () <IL 0x00047, 0x0005f>

    And this .Skip () continues infinitely, and very fast. I've tried cleaning the cache and reinstalling Docky, but without luck.

    Read the article

  • Apache doesn't load .php files

    - by Haddex
    First, sorry for my English and for asking something that's answered all over the web. I've read a lot of posts about this problem but I still can't find the solution. I'm a web developer who recently moved to Ubuntu from Windows 7. I had a website done (it's online and working) and I set up LAMP to keep working with it. I made a test.php file with: <?php phpinfo(); ?> and put it in the /var/www/html directory; it shows all the information about PHP and I was really happy: "Ok, it's all done, tomorrow I will work hard." But I placed my whole site into /var/www/html, not in a folder, so the index.php is in /var/www/html - but guess what: none of my .php files load, the browser just keeps thinking. What I did: I restarted Apache: /etc/init.d/apache2 restart I tried again with the test.php file and it works fine I put a .html file in /var/www/html and it works fine. I looked at /etc/apache2/sites-enabled/000-default.conf and it says: DocumentRoot /var/www/html I looked at /etc/apache2/mods-enabled/dir.conf and it says: DirectoryIndex index.html index.cgi index.pl index.php ... Edit* I think it's something related to phpMyAdmin, as if I'm not able to connect to the database. But I get nothing on the screen when trying to load the page so... I'm not sure. I can access the URL localhost/phpmyadmin and I edited the connection.php file like this: <?php # FileName="Connection_php_mysql.htm" # Type="MYSQL" # HTTP="true" $hostname_rakstadconnection = "localhost"; $database_rakstadconnection = "rakstadclandb"; $username_rakstadconnection = "root"; $password_rakstadconnection = "admin"; $rakstadconnection = mysql_connect($hostname_rakstadconnection, $username_rakstadconnection, $password_rakstadconnection) or trigger_error(mysql_error(),E_USER_ERROR); mysql_query("SET NAMES 'utf8'"); ?> The name of the database is correct, as are the user and password. http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112609_zpsc45ddb72.png http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112120_zps0b9e15f7.png *Edit2: could this be because it's a website that I brought to Linux from Windows? I used Dreamweaver. Edit3: I changed the # to /*/, nothing. The error.log file says: [Mon Jun 09 17:08:13.627881 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Warning: require_once(/var/www/html/Connections/rakstadconnection.php): failed to open stream: Permission denied in /var/www/html/index.php on line 1 [Mon Jun 09 17:08:13.627933 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Fatal error: require_once(): Failed opening required 'Connections/rakstadconnection.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/index.php on line 1 I'm reading the error log but... should I add a Linux path into my index.php file? Don't think so. Thanks.

    Read the article

  • Moved to SSD and now getting "the disk drive for / is not ready yet"

    - by dmt0
    I moved my Ubuntu 12.04 install over to an SSD drive. Copied all directories except for the ones most often written to - var, tmp, ... Reinstalled grub onto the SSD by booting with a live CD and following the commands in this post: How to move Ubuntu to an SSD. This seemed to work fine, because when I press "e" in the grub menu, I see the expected UUIDs. But right after grub I get "could not log bootup: Address already in use" and "the disk drive for / is not ready yet or not present." If I skip, I get the same for /tmp, /run, and other dirs. If I go into manual recovery and do mount -n -o remount,rw / it turns out that everything can mount no problem. Can't get my head around this one. My fstab seems right. grub is right. AHCI in BIOS is enabled. Why is this happening? What can I do to fix it? When I drop into a shell from this error and get to mount things manually, how do I get the OS to continue loading? Thank you guys for any ideas you can give me. Here's what my fstab looks like right now:

    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc defaults 0 0
    UUID=67fc8a7a-f1db-485c-88bd-e007c214244f / ext4 defaults,noatime,discard 0 1
    # swap was on /dev/sda3 during installation
    UUID=6bc9cd6c-46b7-43a0-bfac-bd04cc26cfb6 none swap sw 0 0
    UUID=7397729b-2125-4b1d-b5eb-28866898d773 /hdd ext4 errors=remount-ro 0 1
    /hdd/home /home none bind 0 0
    /hdd/run /run none bind 0 0
    /hdd/tmp /tmp none bind 0 0
    /hdd/var /var none bind 0 0
    /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

    output from blkid:
    /dev/sda1: LABEL="System Reserved" UUID="EABC56C1BC568849" TYPE="ntfs"
    /dev/sda2: UUID="7CCC6124CC60D9C2" TYPE="ntfs"
    /dev/sda3: UUID="6bc9cd6c-46b7-43a0-bfac-bd04cc26cfb6" TYPE="swap"
    /dev/sda5: UUID="7397729b-2125-4b1d-b5eb-28866898d773" TYPE="ext4"
    /dev/sdb1: UUID="67fc8a7a-f1db-485c-88bd-e007c214244f" TYPE="ext4"

    relevant from fdisk -l:
    Device Boot Start End Blocks Id System
    /dev/sdb1 2048 115345407 57671680 83 Linux

    Read the article

  • A Simple Entity Tagger

    - by Elton Stoneman
    In the REST world, ETags are your gateway to performance boosts by letting clients cache responses. In the non-REST world, you may also want to add an ETag to an entity definition inside a traditional service contract - think of a scenario where a consumer persists its own representation of your entity and wants to keep it in sync. Rather than load every entity by ID and check for changes, the consumer can send in a set of linked IDs and ETags, and you can return only the entities where the current ETag is different from the consumer's version. If your entity is a projection from various sources, you may not have a persistent ETag, so you need an efficient way to generate an ETag which is deterministic, so that an entity with the same state always generates the same ETag. I have an implementation for a generic ETag generator on GitHub here: EntityTagger code sample. The essence is simple: we get the entity, serialize it and build a hash from the serialized value. Any changes to either the state or the structure of the entity will result in a different hash. To use it, just call SetETag, passing your populated object and a Func<> which acts as an accessor to the ETag property:

    EntityTagger.SetETag(user, x => x.ETag);

    The whole implementation comes in at about 80 lines of code, all pretty straightforward:

    var eTagProperty = AsPropertyInfo(eTagPropertyAccessor);
    var originalETag = eTagProperty.GetValue(entity, null);
    try
    {
        ResetETag(entity, eTagPropertyAccessor);
        string json;
        var serializer = new DataContractJsonSerializer(entity.GetType());
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, entity);
            json = Encoding.UTF8.GetString(stream.GetBuffer(), 0, (int)stream.Length);
        }
        var guid = GetDeterministicGuid(json);
        eTagProperty.SetValue(entity, guid.ToString(), null);
        //...

    There are a couple of helper methods to check if the object has changed since the ETag value was last set, and to reset the ETag. This implementation uses JSON to do the serializing rather than XML. Benefit: it should be marginally more efficient, as you're hashing a much smaller serialized string. Downside: JSON doesn't include namespaces or class names at the root level, so if you have two classes with the exact same structure but different names, then instances which have the same content will have the same ETag. You may want that behaviour, but change to the XML DataContractSerializer if you think that will be an issue. If you can persist the ETag somewhere, it will save you server processing to load up the entity, but that will only apply to scenarios where you can reliably invalidate your ETag (e.g. if you control all the entry points where entity contents can be updated, then you can calculate and persist the new ETag with each update).
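    The snippet above calls GetDeterministicGuid(json) without showing it; a common way to implement it is to hash the JSON with MD5, which conveniently yields exactly the 16 bytes a Guid needs. This is a plausible sketch, not necessarily the code from the GitHub sample:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static Guid GetDeterministicGuid(string input)
    {
        // MD5 produces a 16-byte hash - the same size as a Guid -
        // so the same input string always maps to the same Guid
        using (MD5 md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
            return new Guid(hash);
        }
    }

    MD5 is fine here because the hash is used as a change marker, not for security.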

    Read the article

  • How can I combine my FTP queries? [migrated]

    - by ryansworld10
    My program has several places where it queries an FTP server to read and upload information. How can I combine all these into one FTP class to handle everything?

    private static void UploadToFTP(string[] FTPSettings)
    {
        try
        {
            FtpWebRequest request = (FtpWebRequest)WebRequest.Create(FTPSettings[0]);
            request.Method = WebRequestMethods.Ftp.MakeDirectory;
            request.Credentials = new NetworkCredential(FTPSettings[1], FTPSettings[2]);
            FtpWebResponse response = (FtpWebResponse)request.GetResponse();
        }
        catch { }
        try
        {
            FtpWebRequest request = (FtpWebRequest)WebRequest.Create(FTPSettings[0] + Path.GetFileName(file));
            request.Method = WebRequestMethods.Ftp.UploadFile;
            request.Credentials = new NetworkCredential(FTPSettings[1], FTPSettings[2]);
            StreamReader source = new StreamReader(file);
            byte[] fileContents = Encoding.UTF8.GetBytes(source.ReadToEnd());
            source.Close();
            request.ContentLength = fileContents.Length;
            Stream requestStream = request.GetRequestStream();
            requestStream.Write(fileContents, 0, fileContents.Length);
            requestStream.Close();
            FtpWebResponse response = (FtpWebResponse)request.GetResponse();
            response.Close();
            RegenLog();
        }
        catch (Exception e)
        {
            File.AppendAllText(file, string.Format("{0}{0}Upload Failed - ({2}) - {1}{0}", nl, System.DateTime.Now, e.Message.ToString()));
        }
    }

    private static void CheckBlacklist(string[] FTPSettings)
    {
        try
        {
            FtpWebRequest request = (FtpWebRequest)WebRequest.Create(FTPSettings[0] + "blacklist.txt");
            request.Credentials = new NetworkCredential(FTPSettings[1], FTPSettings[2]);
            using (WebResponse response = request.GetResponse())
            using (Stream stream = response.GetResponseStream())
            using (TextReader reader = new StreamReader(stream))
            {
                string blacklist = reader.ReadToEnd();
                if (blacklist.Contains(Environment.UserName))
                {
                    File.AppendAllText(file, string.Format("{0}{0}Logger terminated - ({2}) - {1}{0}", nl, System.DateTime.Now, "Blacklisted"));
                    uninstall = true;
                }
            }
        }
        catch (Exception e)
        {
            File.AppendAllText(file, string.Format("{0}{0}FTP Error - ({2}) - {1}{0}", nl, System.DateTime.Now, e.Message.ToString()));
        }
    }

    private static void CheckUpdate(string[] FTPSettings)
    {
        try
        {
            FtpWebRequest request = (FtpWebRequest)WebRequest.Create(FTPSettings[0] + "update.txt");
            request.Credentials = new NetworkCredential(FTPSettings[1], FTPSettings[2]);
            using (WebResponse response = request.GetResponse())
            using (Stream stream = response.GetResponseStream())
            using (TextReader reader = new StreamReader(stream))
            {
                string newVersion = reader.ReadToEnd();
                if (newVersion != version)
                {
                    update = true;
                }
            }
        }
        catch (Exception e)
        {
            File.AppendAllText(file, string.Format("{0}{0}FTP Error - ({2}) - {1}{0}", nl, System.DateTime.Now, e.Message.ToString()));
        }
    }

    I know my code is also a bit inconsistent and messy, however this is my first time working with FTP in C#. Please give any advice you have!
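    One way to start consolidating, sketched here rather than given as a full refactor: every method builds an FtpWebRequest the same way, so that construction can move into a single helper, and the two download methods can share a read helper. The names below are illustrative, not from the original program:

    private static FtpWebRequest CreateRequest(string url, string method, string[] ftpSettings)
    {
        // Shared construction: address, FTP method and credentials
        var request = (FtpWebRequest)WebRequest.Create(url);
        request.Method = method;
        request.Credentials = new NetworkCredential(ftpSettings[1], ftpSettings[2]);
        return request;
    }

    private static string DownloadText(string url, string[] ftpSettings)
    {
        // Shared download-and-read logic usable by CheckBlacklist and CheckUpdate
        var request = CreateRequest(url, WebRequestMethods.Ftp.DownloadFile, ftpSettings);
        using (var response = request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }

    With those two in place, CheckBlacklist reduces to a call to DownloadText plus its Contains check, and the error-logging pattern can likewise move into one shared method.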

    Read the article

  • Byte array serialization in JSON.NET

    - by Daniel Earwicker
    Given this simple class:

    class HasBytes
    {
        public byte[] Bytes { get; set; }
    }

    I can round-trip it through JSON using JSON.NET such that the byte array is base-64 encoded:

    var bytes = new HasBytes { Bytes = new byte[] { 1, 2, 3, 4 } };

    // turn it into a JSON string
    var json = JsonConvert.SerializeObject(bytes);

    // get back a new instance of HasBytes
    var result1 = JsonConvert.DeserializeObject<HasBytes>(json);

    // all is well
    Debug.Assert(bytes.Bytes.SequenceEqual(result1.Bytes));

    But if I deserialize this-a-wise:

    var result2 = (HasBytes)new JsonSerializer().Deserialize(
        new JTokenReader(
            JToken.ReadFrom(new JsonTextReader(
                new StringReader(json)))),
        typeof(HasBytes));

    ... it throws an exception, "Expected bytes but got string". What other options/flags/whatever would need to be added to the "complicated" version to make it properly decode the base-64 string to initialize the byte array? Obviously I'd prefer to use the simple version but I'm trying to work with a CouchDB wrapper library called Divan, which sadly uses the complicated version, with the responsibilities for tokenizing/deserializing widely separated, and I want to make the simplest possible patch to how it currently works.

    Read the article

  • Converting a byte array to a X.509 certificate

    - by ddd
    I'm trying to port a piece of Java code into .NET that takes a Base64 encoded string, converts it to a byte array, and then uses it to make an X.509 certificate to get the modulus & exponent for RSA encryption. This is the Java code I'm trying to convert:

    byte[] externalPublicKey = Base64.decode("base 64 encoded string");
    KeyFactory keyFactory = KeyFactory.getInstance("RSA");
    EncodedKeySpec publicKeySpec = new X509EncodedKeySpec(externalPublicKey);
    Key publicKey = keyFactory.generatePublic(publicKeySpec);
    RSAPublicKey pbrtk = (java.security.interfaces.RSAPublicKey) publicKey;
    BigInteger modulus = pbrtk.getModulus();
    BigInteger pubExp = pbrtk.getPublicExponent();

    I've been trying to figure out the best way to convert this into .NET. So far, I've come up with this:

    byte[] bytes = Convert.FromBase64String("base 64 encoded string");
    X509Certificate2 x509 = new X509Certificate2(bytes);
    RSA rsa = (RSA)x509.PrivateKey;
    RSAParameters rsaParams = rsa.ExportParameters(false);
    byte[] modulus = rsaParams.Modulus;
    byte[] exponent = rsaParams.Exponent;

    Which to me looks like it should work, but it throws an exception when I use the base 64 encoded string from the Java code to generate the X509 certificate. Is Java's X.509 implementation just incompatible with .NET's, or am I doing something wrong in my conversion from Java to .NET? Or is there simply no conversion from Java to .NET in this case?
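    For context on the format mismatch: Java's X509EncodedKeySpec expects DER-encoded SubjectPublicKeyInfo - a bare public key, not a full certificate - which is why handing those bytes to X509Certificate2 fails. On newer .NET runtimes (.NET Core 3.0 and later, well after this question was written) there is a direct import for that layout; this sketch assumes such a runtime is available:

    byte[] externalPublicKey = Convert.FromBase64String("base 64 encoded string");

    using (RSA rsa = RSA.Create())
    {
        // Reads the same DER SubjectPublicKeyInfo layout that Java's
        // X509EncodedKeySpec produces (requires .NET Core 3.0+)
        rsa.ImportSubjectPublicKeyInfo(externalPublicKey, out _);

        RSAParameters parameters = rsa.ExportParameters(false);
        byte[] modulus = parameters.Modulus;
        byte[] exponent = parameters.Exponent;
    }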

    Read the article

  • PDF rendering crashes app Core Graphics

    - by Felixyz
    EDIT: The memory leaks turned out to be unrelated to the crashes. Leaks are fixed but crashes remain, still mysterious. My (iPhone) app does lots of PDF loading and rendering, some of it threaded. Sometimes, seemingly always after I flush a page cache following a memory warning, the app crashes with a bad access when trying to draw a PDF page stored in an NSData object. Here is one example trace: #0 0x3016d564 in CGPDFResourcesGetResource () #1 0x3016d58a in CGPDFResourcesGetResource () #2 0x3016d94e in CGPDFResourcesGetExtGState () #3 0x3015fac4 in CGPDFContentStreamGetExtGState () #4 0x301629a8 in op_gs () #5 0x3016df12 in handle_xname () #6 0x3016dd9e in read_objects () #7 0x3016de6c in CGPDFScannerScan () #8 0x30161e34 in CGPDFDrawingContextDraw () #9 0x3016a9dc in CGContextDrawPDFPage () But sometimes I get this instead: Program received signal: “EXC_BAD_ACCESS”. (gdb) bt #0 0x335625fa in objc_msgSend () #1 0x32c04eba in CFDictionaryGetValue () #2 0x3016d500 in get_value () #3 0x3016d5d6 in CGPDFResourcesGetFont () #4 0x3015fbb4 in CGPDFContentStreamGetFont () #5 0x30163480 in op_Tf () #6 0x3016df12 in handle_xname () #7 0x3016dd9e in read_objects () #8 0x3016de6c in CGPDFScannerScan () #9 0x30161e34 in CGPDFDrawingContextDraw () #10 0x3016a9dc in CGContextDrawPDFPage () Is this an indication that I've mistakenly deallocated an object? It's hard for me to decode what's happening here. This is how I create and retain the various objects involved:

    // Some data was just loaded from the network and is pointed to by "data"
    self.pdfData = data;
    _dataProviderRef = CGDataProviderCreateWithData( NULL, [_pdfData bytes], [_pdfData length], NULL );
    _documentRef = CGPDFDocumentCreateWithProvider(_dataProviderRef);
    _pageRef = CGPDFDocumentGetPage(_documentRef, 1);
    CGPDFPageRetain(_pageRef);
    _pdfFrame = CGPDFPageGetBoxRect(_pageRef, kCGPDFArtBox);

    So the NSData object is retained, and I explicitly retain the page reference. The data provider and the document are already retained by the create-functions. And here is my dealloc method:

    -(void)dealloc {
        if (_pageRef) CGPDFPageRelease(_pageRef);
        if (_documentRef) CGPDFDocumentRelease(_documentRef);
        if (_dataProviderRef) CGDataProviderRelease(_dataProviderRef);
        self.pdfData = nil;
        [super dealloc];
    }

    Am I doing anything wrong? Even an assurance that I'm not, with explanation, would be a help.

    Read the article

  • Extjs Dynamic Grid

    - by rkenshin
    Hi, I'm trying to create a dynamic grid using ExtJS. The grid is built and displayed when a click event is fired; an Ajax request is then sent to the server to fetch the columns, records and record definitions, a.k.a. Store Fields. Each node could have a different grid structure, and that depends on the level of the node in the tree. The only way I've come up with so far is:

    function showGrid(response, request) {
        var jsonData = Ext.util.JSON.decode(response.responseText);
        var grid = Ext.getCmp('contentGrid'+request.params.owner);
        if(grid) {
            grid.destroy();
        }
        var store = new Ext.data.ArrayStore({
            id : 'arrayStore',
            fields : jsonData.recordFields,
            autoDestroy : true
        });
        grid = new Ext.grid.GridPanel({
            defaults: {sortable:true},
            id:'contentGrid'+request.params.owner,
            store: store,
            columns: jsonData.columns,
            //width:540,
            //height:200,
            loadMask: true
        });
        store.loadData(jsonData.records);
        if(Ext.getCmp('tab-'+request.params.owner)) {
            Ext.getCmp('tab-'+request.params.owner).show();
        } else {
            grid.render('grid-div');
            Ext.getCmp('card-tabs-panel').add({
                id:'tab-'+request.params.owner,
                title: request.params.text,
                iconCls:'silk-tab',
                html:Ext.getDom('grid-div').innerHTML,
                closable:true
            }).show();
        }
    }

    The function above is called when a click event is fired:

    'click': function(node) {
        Ext.Ajax.request({
            url: 'showCtn',
            success: function(response, request) {
                alert('Success');
                showGrid(response, request);
            },
            failure: function(results, request) {
                alert('Error');
            },
            params: Ext.urlDecode(node.attributes.options)
        });
    }

    The problem I'm getting with this code is that a new grid is displayed each time the showGrid function is called. The end user sees the old grids and the new one. To mitigate this problem, I tried destroying the grid and also removing the grid element on each request, and that seems to solve the problem, except that records never get displayed this time.

    if(grid) {
        grid.destroy(true);
    }

    The behavior I'm looking for is to display the resulting grid within a tab and, if that tab exists, replace the old grid. Any help is appreciated. Thank you.

    Read the article

  • ssh tunnel - bind: Cannot assign requested address

    - by JosephK
    Trying to create a socks (-D) ssh tunnel - Linux box to Linux box (both CentOS): sshd running on the remote side ok. From the local machine we do / see this: ssh -D 1080 [email protected]. [email protected]'s password: bind: Cannot assign requested address (where 8.8.8.8 is really my server's IP and 'user' is my real username) I am logged into the remote side in this terminal window. I can verify that the local port was unused prior to this command, and then used by an ssh process after the command, via: netstat -lnp | grep 1080 So, unlike most googled responses with this error, the problem would not seem to be the loopback interface assignment. If I try to use this tunnel with a mail client, the local side permits the attempt (no 'proxy-failed' error), but no data / reply is returned. On the remote side, I do have "PermitTunnel yes" in my sshd_config (though 'yes' should be the default, anyway). Ideas or clues? Here is the relevant debug output:

    OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    ....
    debug1: Authentication succeeded (password).
    debug1: Local connections to LOCALHOST:1080 forwarded to remote address socks:0
    debug1: Local forwarding listening on 127.0.0.1 port 1080.
    debug1: channel 0: new [port listener]
    debug1: Local forwarding listening on ::1 port 1080.
    bind: Cannot assign requested address
    debug1: channel 1: new [client-session]
    debug1: Entering interactive session.
    debug1: Sending environment.
    debug1: Sending env LANG = en_US.utf8

    Other clue: If I run a VirtualBox VM running Windows on the client and open a tunnel with PuTTY in that box, that tunnel to the same remote server works.

    Read the article

  • Does anyone use Fortify 360 with Classic ASP? a Header Manipulation vulnerability story

    - by j_green71
    Good morning, everyone. I'm on a short-term contracting gig, trying to patch some vulnerabilities in their legacy code. The application I'm working on is a combination of Classic ASP (VBScript) and .NET 2.0 (C#). One of the tools they have purchased is Fortify 360. Let's say that this is a current classic ASP page in the application:

    <%@ Language=VBScript %>
    <%
    Dim var
    var = Request.QueryString("var")
    ' do stuff
    Response.Redirect "nextpage.asp?var=" & var
    %>

    I know, I know, short and very dangerous. So we wrote some (en/de)coders and validation/verification routines:

    <%@ Language=VBScript %>
    <%
    Dim var
    var = Decode(Request.QueryString("var"))
    ' do stuff
    if isValid(var) then
        Response.Redirect "nextpage.asp?var=" & Encode(var)
    else
        'throw error page
    end if
    %>

    And still Fortify flags this as vulnerable to Header Manipulation. How or what exactly is Fortify looking for? The reason I suspect that Fortify is looking for specific key words is that on the .NET side of things, I can include the Microsoft AntiXss assembly and call functions such as GetSafeHtmlFragment and UrlEncode and Fortify is happy. Any advice?

    Read the article

  • WCF net.tcp bindings, message formats and security questions

    - by RemotecUk
    Hi, sorry for the stupid questions but there are just some things about WCF I can't get my head around. I would be grateful for some advice on the following... At a very basic level, is it correct that WCF uses either Binary (Net.Tcp), HTTP or MSMQ to transfer my message on the wire? And is it true that in all cases, regardless of how the data is transferred, the message itself is in the SOAP format with headers and a body? So it's a sort of XML message that is transmitted in either HTTP/S or in a binary format. Is Net.Tcp a good choice for my client-server app - it's similar to a messenger app in that the clients are all remote users on the other side of the firewall to my server. Most things I am reading are telling me to use WS* and HTTP. Is Net.Tcp secured by default and without certificates? That is, people cannot listen on the wire and decode the data that's going to and from. Is it possible to send a username and password using net.tcp and without an installed certificate? If so, I presume I can hook this up to my membership provider and authenticate access to each method on my service contract implementation. I presume that with username and password security, the proxy is initialised with the username and password and that this information is sent with every request. Then my membership provider will be invoked for each method call and do whatever it needs to do to get the authorisation for the method. Sorry for the dump of questions but it would be great to know if I'm thinking the right way about how WCF works. Thanks.
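    A small sketch relating to the last point, hedged since the right setup depends on the environment: NetTcpBinding defaults to transport security with Windows credentials, and switching to username/password credentials means message security, which in turn typically requires a service certificate to protect the credentials on the wire. The service contract and address below are hypothetical:

    using System.ServiceModel;

    // Message security with username/password credentials;
    // the service side must also be configured with a certificate
    var binding = new NetTcpBinding(SecurityMode.Message);
    binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

    var factory = new ChannelFactory<IMyService>(binding,
        new EndpointAddress("net.tcp://example.com:8081/MyService"));
    factory.Credentials.UserName.UserName = "user";
    factory.Credentials.UserName.Password = "secret";
    IMyService client = factory.CreateChannel();

    On the service side, those credentials can then be validated against a membership provider via the serviceCredentials behavior.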

    Read the article

  • CGImageCreateWithMask with an image as a mask

    - by Peyman
    Hi, I am trying to use an image (painted with Core Graphics) to create a mask. What I am doing is this:

    1. Creating a Core Graphics path:
    CGContextSaveGState(context);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context,circleCenter.x,circleCenter.y);
    //CGContextSetAllowsAntialiasing(myBitmapContext, YES);
    CGContextAddArc(context,circleCenter.x, circleCenter.y,circleRadius,startingAngle, endingAngle, 0); // 0 is counterclockwise
    CGContextClosePath(context);
    CGContextSetRGBStrokeColor(context,1.0,0.0,0.0,1.0);
    CGContextSetRGBFillColor(context,1.0,0.0,0.0,0.2);
    CGContextDrawPath(context, kCGPathFillStroke);

    2. Then saving the context that has the path just painted:
    CGImageRef pacmanImage = CGBitmapContextCreateImage (context);

    3. Restoring the context:
    CGContextRestoreGState(context);
    CGContextSaveGState(context);

    4. Creating a 1-bit mask (which will provide the black-white mask):
    bitsPerComponent = 1;
    bitsPerPixel = bitsPerComponent * 1 ;
    bytesPerRow = (CGImageGetWidth(imgToMaskRef) * bitsPerPixel);
    mask = CGImageCreate(CGImageGetWidth(imgToMaskRef), CGImageGetHeight(imgToMaskRef), bitsPerComponent, bitsPerPixel, bytesPerRow, greyColorSpace, kCGImageAlphaNone, CGImageGetDataProvider(pacmanImage), NULL, //decode
        YES, //shouldInterpolate
        kCGRenderingIntentDefault);

    5. Masking the imgToMaskRef (which is a CGImageRef: imgToMaskRef = imgToMask.CGImage;) with the mask just created:
    imageMaskedWithImage = CGImageCreateWithMask(imgToMaskRef, mask);
    CGContextDrawImage(context,imgRectBox, imageMaskedWithImage);
    CGImageRef maskedImageFinal = CGBitmapContextCreateImage (context);

    and returning the maskedImageFinal to the caller of this method (as wheelChoiceMadeState, which is a CGImageRef), who then updates the CALayer contents property with the image: theLayer.contents = (id) wheelChoiceMadeState;

    The problem I am seeing is that the mask does not work properly and looks very strange indeed. I get strange patterns across the path painted by Core Graphics. My hunch is there is something wrong with CGImageGetDataProvider() but I am not sure. Any help would be appreciated, thank you.

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >