Search Results

Search found 4900 results on 196 pages for 'upload'.


  • meteor mongodb _id changing after insert (and UUID property as well)

    - by lommaj
    I have a Meteor method that does an insert. I'm using Regulate.js for form validation. I set the game_id field to Meteor.uuid() to create a unique value that I also route to /game_show/:game_id using Iron Router. As you can see, I'm logging the details of the game, and this works fine (image link to the log below). Meteor.methods({ create_game_form : function(data){ Regulate.create_game_form.validate(data, function (error, data) { if (error) { console.log('Server side validation failed.'); } else { console.log('Server side validation passed!'); // Save data to database or whatever... //console.log(data[0].value); var new_game = { game_id: Meteor.uuid(), name : data[0].value, game_type: data[1].value, creator_user_id: Meteor.userId(), user_name: Meteor.user().profile.name, created: new Date() }; console.log("NEW GAME BEFORE INSERT: ", new_game); GamesData.insert(new_game, function(error, new_id){ console.log("GAMES NEW MONGO ID: ", new_id) var game_data = GamesData.findOne({_id: new_id}); console.log('NEW GAME AFTER INSERT: ', game_data); Session.set('CURRENT_GAME', game_data); }); } }); } }); All of the data coming out of console.log at this point is fine. After this method call, the client routes to /game_show/:game_id: Meteor.call('create_game_form', data, function(error){ if(error){ return alert(error.reason); } //console.log("post insert data for routing variable " ,data); var created_game = Session.get('CURRENT_GAME'); console.log("Session Game ", created_game); Router.go('game_show', {game_id: created_game.game_id}); }); On this view, I try to load the document with the game_id I just inserted: Template.game_start.helpers({ game_info: function(){ console.log(this.game_id); var game_data = GamesData.find({game_id: this.game_id}); console.log("trying to load via UUID ", game_data); return game_data; } }); Sorry, I can't upload images... :-( https://www.evernote.com/shard/s21/sh/c07e8047-de93-4d08-9dc7-dae51668bdec/a8baf89a09e55f8902549e79f136fd45 As you can see from the image of the console log, everything matches: the id logged before insert, the id logged in the insert callback using findOne(), and the id passed in the URL. However, the Mongo ID and the UUID I inserted ARE NOT THERE; the only document in there has all the other fields matching except those two! Not sure what I'm doing wrong. Thanks!
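
    One likely explanation: a Meteor method body runs twice, once as a client-side stub and once for real on the server, so Meteor.uuid() produces a different value in each run; the server's document then replaces the stub's in the local cache, which is why neither the stub's _id nor its uuid survives. A minimal sketch of a workaround (reusing the post's own names) is to return the server's game_id from the method instead of reading a stale copy back through the Session:

        Meteor.methods({
          create_game_form: function (data) {
            // ...validation as above...
            var new_game = {
              game_id: Meteor.uuid()  // evaluated separately in the stub and on the server
              // ...remaining fields as above...
            };
            GamesData.insert(new_game);
            return new_game.game_id;  // the method callback receives the server's value
          }
        });

        Meteor.call('create_game_form', data, function (error, game_id) {
          if (error) { return alert(error.reason); }
          Router.go('game_show', {game_id: game_id});  // route with the server's id
        });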


  • Android Google Analytics

    - by ibenot
    I'm trying to use Google Analytics in my Android application, following the Google configuration: add the .jar to my project, insert the required entries in AndroidManifest, and add this to my Java file: public class MainActivity extends Activity { GoogleAnalyticsTracker tracker; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); tracker = GoogleAnalyticsTracker.getInstance(); tracker.startNewSession("My-UA–XXXXXXXX", this); setContentView(R.layout.main); Button createEventButton = (Button)findViewById(R.id.NewEventButton); createEventButton.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { tracker.trackEvent( "Clicks", // Category "Button", // Action "clicked", // Label 77); // Value } }); setContentView(R.layout.main); Button createPageButton = (Button)findViewById(R.id.NewPageButton); createPageButton.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { // Add a Custom Variable to this pageview, with name of "Medium" and value "MobileApp" and // scope of session-level. tracker.setCustomVar(1, "Navigation Type", "Button click", 2); // Track a page view. This is probably the best way to track which parts of your application // are being used. // E.g. // tracker.trackPageView("/help"); to track someone looking at the help screen. // tracker.trackPageView("/level2"); to track someone reaching level 2 in a game. // tracker.trackPageView("/uploadScreen"); to track someone using an upload screen. tracker.trackPageView("/testApplicationHomeScreen"); } }); Button quitButton = (Button)findViewById(R.id.QuitButton); quitButton.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { finish(); } }); Button dispatchButton = (Button)findViewById(R.id.DispatchButton); dispatchButton.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { // Manually start a dispatch, not needed if the tracker was started with a dispatch // interval. tracker.dispatch(); } }); } @Override protected void onDestroy() { super.onDestroy(); // Stop the tracker when it is no longer needed. tracker.stopSession(); } } == Everything compiles and runs with no errors, but I created my UA account yesterday (more than 24 hours ago) and I still have nothing in my Google Analytics panel. My questions: is there an error in my code, or do I just need to wait longer? Does live traffic work for an Android application the way it does for a traditional website? I have no information about live traffic (when I run my app, I would like to see the number of people using it) or saved traffic (with pages viewed and time spent). Thank you for your replies, and excuse my poor English. :) Bye
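
    Two things are worth checking before suspecting the code itself: with the legacy GoogleAnalyticsTracker, nothing leaves the device until dispatch() runs, and the tracking ID must match the UA-XXXXXXXX-Y pattern exactly (the dash shown in the post is an en dash, which would send hits nowhere). Standard, non-real-time Analytics reports also typically lag up to 24 hours. A sketch, assuming the v1 SDK used in the post:

        // Start the session with a dispatch interval (in seconds) so hits are
        // sent automatically instead of waiting for a manual dispatch() call.
        tracker = GoogleAnalyticsTracker.getInstance();
        tracker.startNewSession("UA-XXXXXXXX-Y", 20, this);  // dispatch every 20 s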


  • Error: A generic error occurred in GDI+

    - by sanfra1983
    Hi, I created a web project on the server, and when I upload an image it shows me the error "A generic error occurred in GDI+". I have read many links on the net that talk about this issue, and although I made the changes they suggest, nothing worked. I was wondering whether it might be an issue of permissions on the folders; in fact I have two folders, one inside the other. This is the code to resize the image:

        public Bitmap ResizeImage(Stream stream, int? lnWidth, int? lnHeight)
        {
            System.Drawing.Bitmap bmpOut = null;
            const int defaultWidth = 800;
            const int defaultHeight = 600;
            int width = lnWidth == null ? defaultWidth : (int)lnWidth;
            int height = lnHeight == null ? defaultHeight : (int)lnHeight;
            try
            {
                Bitmap loBMP = new Bitmap(stream);
                ImageFormat loFormat = loBMP.RawFormat;
                decimal lnRatio;
                int lnNewWidth = 0;
                int lnNewHeight = 0;
                if (loBMP.Width < width && loBMP.Height < height)
                {
                    return loBMP;
                }
                if (loBMP.Width > loBMP.Height)
                {
                    lnRatio = (decimal)width / loBMP.Width;
                    lnNewWidth = width;
                    decimal lnTemp = loBMP.Height * lnRatio;
                    lnNewHeight = (int)lnTemp;
                }
                else
                {
                    lnRatio = (decimal)height / loBMP.Height;
                    lnNewHeight = height;
                    decimal lnTemp = loBMP.Width * lnRatio;
                    lnNewWidth = (int)lnTemp;
                }
                bmpOut = new Bitmap(lnNewWidth, lnNewHeight);
                Graphics g = Graphics.FromImage(bmpOut);
                g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                g.FillRectangle(Brushes.White, 0, 0, lnNewWidth, lnNewHeight);
                g.DrawImage(loBMP, 0, 0, lnNewWidth, lnNewHeight);
                loBMP.Dispose();
            }
            catch
            {
                return null;
            }
            return bmpOut;
        }

    and this is the code in the code-behind:

        string filepath = AppDomain.CurrentDomain.BaseDirectory + "img_veterinario/";
        string filepathM = AppDomain.CurrentDomain.BaseDirectory + "img_veterinario/img_veterinarioM";
        Reseize r = new Reseize();
        Bitmap photosFileOriginal = r.ResizeImage(fucasiclinici.PostedFile.InputStream, 400, 400);
        Bitmap photosFileMiniatura = r.ResizeImage(fucasiclinici.PostedFile.InputStream, 72, 72);
        string filename = Path.GetFileName(fucasiclinici.PostedFile.FileName);
        photosFileOriginal.Save(Path.Combine(filepath, filename));
        photosFileMiniatura.Save(Path.Combine(filepathM, filename));

    Can you help me? Thanks
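
    Two usual suspects for this error, sketched here against the post's own names: the same PostedFile.InputStream is read twice (so the second Bitmap gets a stream whose position is already at the end), and Bitmap.Save throws the generic GDI+ error when the target folder is missing or not writable by the application pool identity.

        Stream input = fucasiclinici.PostedFile.InputStream;

        input.Position = 0;                                  // rewind before the first resize
        Bitmap original = r.ResizeImage(input, 400, 400);

        input.Position = 0;                                  // rewind again; the first call consumed the stream
        Bitmap thumbnail = r.ResizeImage(input, 72, 72);

        string filepath  = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "img_veterinario");
        string filepathM = Path.Combine(filepath, "img_veterinarioM");
        Directory.CreateDirectory(filepathM);                // no-op if the folders already exist

    If the save still fails after this, grant the application pool user write permission on both folders.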


  • Using ImageMagick to create an image from a PDF...efficiently

    - by bigsweater
    I'm using ImageMagick to create a tiny JPG thumbnail image of an already-uploaded PDF. The code works fine. It's a WordPress widget, though this isn't necessarily WordPress specific. I'm unfamiliar with ImageMagick, so I was hoping somebody could tell me if this looks terrible or isn't following some best practices of some sort, or if I'm risking crashing the server. My questions, specifically, are: Is that image cached, or does the server have to re-generate the image every time somebody views the page? If it isn't cached, what's the best way to make sure the server doesn't have to regenerate the thumbnail? I tried to create a separate folder (/thumbs) for ImageMagick to put all the images in, instead of cluttering up the WP upload folders with images of PDFs. It kept throwing a permission error, despite 777 permissions on the folder in my testing environment. Why? Do the source/destination directories have to be the same? Am I doing anything incorrectly/inefficiently here that needs to be improved? The whole widget is on Pastebin: http://pastebin.com/WnSTEDm7 Relevant code: <?php if ( $url ) { $pdf = $url; $info = pathinfo($pdf); $filename = basename($pdf,'.'.$info['extension']); $uploads = wp_upload_dir(); $file_path = str_replace( $uploads['baseurl'], $uploads['basedir'], $url ); $dest_path = str_replace( '.pdf', '.jpg', $file_path ); $dest_url = str_replace( '.pdf', '.jpg', $pdf ); exec("convert \"{$file_path}[0]\" -colorspace RGB -geometry 60 $dest_path"); ?> <div class="entry"> <div class="widgetImg"> <p><a href="<?php echo $url; ?>" title="<?php echo $filename; ?>"><?php echo "<img src='".$dest_url."' alt='".$filename."' class='blueBorder' />"; ?></a></p> </div> <div class="widgetText"> <?php echo wpautop( $desc ); ?> <p><a class="downloadLink" href="<?php echo $url; ?>" title="<?php echo $filename; ?>">Download</a></p> </div> </div> <?php } ?> As you can see, the widget grabs whatever PDF is attached to the current page being viewed, creates an image of the first page of the PDF, stores it, then links to it in HTML. Thanks for any and all help!
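
    As written, exec() runs on every page view, so nothing is cached. A guard that regenerates the JPG only when it is missing or older than the PDF answers the first two questions cheaply; a sketch reusing the post's variables:

        // Shell out to ImageMagick only when no up-to-date thumbnail exists.
        if ( !file_exists($dest_path) || filemtime($dest_path) < filemtime($file_path) ) {
            exec(sprintf('convert %s -colorspace RGB -geometry 60 %s',
                escapeshellarg($file_path . '[0]'),
                escapeshellarg($dest_path)));
        }

    On the /thumbs permission error: the directory must be writable by the web server's own user (not your FTP user), and source and destination directories do not have to match; convert writes wherever the destination path points.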


  • How to give the First image in a gallery a different class than the rest of the images - Carrierwave

    - by ChrisBedoya
    I have a model called "Photo" that belongs to a model called "Shoe". I am using Carrierwave to upload multiple images. index.html.erb <% shoes.each do |shoe| %> <div class="shoe"> <div class="gallery"> <% shoe.photos.each do |photo| %> <%= link_to image_tag(photo.photo_file.url(:thumb).to_s), photo.photo_file.url.to_s, :class => 'fancybox', :rel => 'gallery' %> <% end %> </div> </div> <% end %> Outputs this: <div class="shoe"> <div class="gallery"> <a class="fancybox" href="../nike-kd-6-meteorology-2.jpg" rel="gallery"> <img src="../thumb_nike-kd-6-meteorology-2.jpg"> </a> <a class="fancybox" href="../nike-kd-6-meteorology-2.jpg" rel="gallery"> <img src="../thumb_nike-kd-6-meteorology-2.jpg"> </a> <a class="fancybox" href="../nike-kd-6-meteorology-2.jpg" rel="gallery"> <img src="../thumb_nike-kd-6-meteorology-2.jpg"> </a> </div> </div> But I want the first image of each gallery to have its own class and the rest of the images to have their own class. Something like this: <a class="firstclass" href="../nike-kd-6-meteorology-2.jpg" rel="gallery"> <img src="../thumb_nike-kd-6-meteorology-2.jpg"> </a> <a class="fancybox" href="../nike-kd-6-meteorology-2.jpg" rel="gallery"> <img src="../thumb_nike-kd-6-meteorology-2.jpg"> </a> <a class="fancybox" href="../nike-kd-6-meteorology-2.jpg" rel="gallery"> <img src="../thumb_nike-kd-6-meteorology-2.jpg"> </a> How can I do this? Also, I want each gallery to have its own unique id, but when I try to add this: :rel => 'gallery<%= shoe.id %>' I get a syntax error. Thanks.
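
    each_with_index covers the first-image case, and the syntax error comes from nesting ERB tags inside a Ruby string; string interpolation does the same job. A sketch against the post's models:

        <% shoe.photos.each_with_index do |photo, index| %>
          <%= link_to image_tag(photo.photo_file.url(:thumb).to_s),
                      photo.photo_file.url.to_s,
                      :class => (index.zero? ? 'firstclass' : 'fancybox'),
                      :rel   => "gallery#{shoe.id}" %>
        <% end %>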


  • how to make DIV show up on top of other Divs

    - by feelexit
    I uploaded my code to jsFiddle; you can see it here: http://jsfiddle.net/SrT2U/2/ When you click on the link, the hidden FAQ section shows up and pushes the other divs down. But that is not what I want: I need all the other divs to stay where they are, and my hidden FAQ section to just float on top. Not sure how to do it; not even sure if this should be done in HTML, CSS or jQuery. My jQuery code: $(function(){ $(".OpenTopMessage").click(function () { $("#details").slideToggle("slow"); }); }); HTML code: <div style="border: 1px solid #000;"> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> <span>link</span> <span>&nbsp;&nbsp;|&nbsp;&nbsp;</span> </div> <div id="faqBox"> <table width="100%"> <tr><td><a href="#" id="openFAQ" class="OpenTopMessage">this is hte faq section</a></td> </tr> </table> <div id="details" style="display:none"> <br/><br/><br/><br/><br/><br/><br/><br/> the display style property is set to none to ensure that the element no longer affects the layout of the page <br/><br/><br/><br/><br/><br/><br/><br/> </div> </div> <br/><br/> <div style="background:#c00;">other stuff heren the height reaches 0 after a hiding animation, the display style property is set to </div>
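
    This is a CSS job: make the FAQ container a positioning context and take the hidden panel out of normal flow, so slideToggle() opens it on top of the following divs instead of pushing them down. A minimal sketch using the post's own ids:

        #faqBox  { position: relative; }    /* positioning context for the panel */
        #details {
            position: absolute;             /* removed from normal flow */
            top: 100%;                      /* opens just below the FAQ link */
            left: 0;
            width: 100%;
            background: #fff;               /* opaque, so it covers what is underneath */
            z-index: 10;                    /* stacks above the sibling divs */
        }

    The jQuery stays exactly as it is.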


  • How to Transfer Large File from MS Word Add-In (VBA) to Web Server?

    - by Ian Robinson
    Overview I have a Microsoft Word Add-In, written in VBA (Visual Basic for Applications), that compresses a document and all of it's related contents (embedded media) into a zip archive. After creating the zip archive it then turns the file into a byte array and posts it to an ASMX web service. This mostly works. Issues The main issue I have is transferring large files to the web site. I can successfully upload a file that is around 40MB, but not one that is 140MB (timeout/general failure). A secondary issue is that building the byte array in the VBScript Word Add-In can fail by running out of memory on the client machine if the zip archive is too large. Potential Solutions I am considering the following options and am looking for feedback on either option or any other suggestions. Option One Opening a file stream on the client (MS Word VBA) and reading one "chunk" at a time and transmitting to ASMX web service which assembles the "chunks" into a file on the server. This has the benefit of not adding any additional dependencies or components to the application, I would only be modifying existing functionality. (Fewer dependencies is better as this solution should work in a variety of server environments and be relatively easy to set up.) Question: Are there examples of doing this or any recommended techniques (either on the client in VBA or in the web service in C#/VB.NET)? Option Two I understand WCF may provide a solution to the issue of transferring large files by "chunking" or streaming data. However, I am not very familiar with WCF, and am not sure what exactly it is capable of or if I can communicate with a WCF service from VBA. This has the downside of adding another dependency (.NET 3.0). But if using WCF is definitely a better solution I may not mind taking that dependency. Questions: Does WCF reliably support large file transfers of this nature? If so, what does this involve? Any resources or examples? Are you able to call a WCF service from VBA? Any examples?
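
    For option one, the client side can read the zip with plain VBA binary I/O and post one chunk per request; this sketch (assuming zipPath holds the archive's path) only indicates the web-service call with a comment, since UploadChunk is a hypothetical method name. The real ASMX method would take the file name, an offset, and the bytes, and append them server-side with a FileStream.

        Const CHUNK_SIZE As Long = 1048576          ' 1 MB per round trip
        Dim buffer() As Byte
        Dim fileNum As Integer
        Dim bytesLeft As Long, chunkLen As Long

        fileNum = FreeFile
        Open zipPath For Binary Access Read As #fileNum
        bytesLeft = LOF(fileNum)

        Do While bytesLeft > 0
            chunkLen = IIf(bytesLeft < CHUNK_SIZE, bytesLeft, CHUNK_SIZE)
            ReDim buffer(0 To chunkLen - 1)
            Get #fileNum, , buffer                  ' reads from the current file position
            ' Call the hypothetical UploadChunk(fileName, offset, buffer) web method here.
            bytesLeft = bytesLeft - chunkLen
        Loop

        Close #fileNum

    Reading fixed-size chunks also sidesteps the out-of-memory problem, since the whole archive never sits in a single byte array on the client.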


  • How do I write object classes effectively when dealing with table joins?

    - by Chris
    I should start by saying I'm not now, nor do I have any delusions I'll ever be a professional programmer so most of my skills have been learned from experience very much as a hobby. I learned PHP as it seemed a good simple introduction in certain areas and it allowed me to design simple web applications. When I learned about objects, classes etc the tutor's basic examnples covered the idea that as a rule of thumb each database table should have its own class. While that worked well for the photo gallery project we wrote, as it had very simple mysql queries, it's not working so well now my projects are getting more complex. If I require data from two separate tables which require a table join I've instead been ignoring the class altogether and handling it on a case by case basis, OR, even worse been combining some of the data into the class and the rest as a separate entity and doing two queries, which to me seems inefficient. As an example, when viewing content on a forum I wrote, if you view a thread, I retrieve data from the threads table, the posts table and the user table. The queries from the user and posts table are retrieved via a join and not instantiated as an object, whereas the thread data is called using my Threads class. So how do I get from my current state of affairs to something a little less 'stupid', for want of a better word. Right now I have a DB class that deals with connection and escaping values etc, a parent db query class that deals with the common queries and methods, and all of the other classes (Thread, Upload, Session, Photo and ones thats aren't used Post, User etc ) are children of that. Do I make a big posts class that has the relevant extra attributes that I retrieve from the users (and potentially threads) table? Do I have separate classes that populate each of their relevant attributes with a single query? If so how do I do that? Because of the way my classes are written, based on what I was taught, my db update row method, or insert method both just take the attributes as an array and update all of that, if I have extra attributes from other db tables in each class then how do I rewrite those methods as obbiously updating automatically like that would result in errors? In short I think my understanding is limited right now and I'd like some pointers when it comes to the fundamentals of how to write more complex classes.
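
    One workable middle ground is a read-only "view" class hydrated from whatever a JOIN returns, kept separate from the per-table classes that still handle their own inserts and updates. A hedged sketch with invented names:

        <?php
        // Read-only projection of a post joined with its author.
        class PostView
        {
            public $id;
            public $body;
            public $authorName;

            public static function fromRow(array $row)
            {
                $view = new self();
                $view->id         = $row['id'];
                $view->body       = $row['body'];
                $view->authorName = $row['user_name'];  // from the joined users table
                return $view;
            }
        }

        // One query, one object per row:
        // SELECT p.id, p.body, u.user_name
        // FROM posts p JOIN users u ON u.id = p.user_id
        // WHERE p.thread_id = ?

    Because the view class never writes, the insert/update methods on the real table classes can stay exactly as they are.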


  • assign a model's attribute through association

    - by justcode
    I'm new to Rails and working on a Rails app, and I'm stuck pondering this issue. I have three models: class Product < ActiveRecord::Base attr_accessible :name, :issn, :category validates_presence_of :name, :issn, :category validates_numericality_of :issn, :message => "has to be a number" has_many :user_products has_many :users, :through => :user_products end class UserProduct < ActiveRecord::Base attr_accessible :price, :category validates_presence_of :price, :category validates_numericality_of :price, :message => "has to be a number" belongs_to :user belongs_to :product end class User < ActiveRecord::Base # devise authentication here # Setup accessible (or protected) attributes for your model attr_accessible :email, :password, :password_confirmation, :remember_me has_many :user_products has_many :products, :through => :user_products end Here is my new.html.erb: <div class="MainBodyWrapper"> <div class="span8"> <div id="listBoxWrapper"> <fieldset> <%= form_for(@product, :html => { :class => "form-inline" }, :style => "margin-bottom: 60px" ) do |f| %> <div class="control-group"> <label class="control-label" for="name">name</label> <div class="controls"> <%= f.text_field :price, :class => 'input-xlarge input-name', :id => "name" %> </div> </div> <div class="listingButtons"> <button class="btn btn-info"></i>Add</button> <a class="btn">Upload Pictures (anytime)</a> </div> </fieldset> </div> </div> There are reasons why I want to set up the models this way. So the question is this: I want the user to enter the info for the product in the form, but it also involves putting in the price of the product, which exists in a different model/table (user_product) that is associated with product. How can I do this? You can see that my form_for uses @product. Any help will be appreciated.
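
    Rails' stock answer here is accepts_nested_attributes_for plus fields_for, which lets the product form carry the price through to the joined user_products row. A sketch against the post's models (the controller's new action would also need @product.user_products.build so the nested field renders):

        class Product < ActiveRecord::Base
          has_many :user_products
          has_many :users, :through => :user_products
          accepts_nested_attributes_for :user_products
          attr_accessible :name, :issn, :category, :user_products_attributes
        end

    and in new.html.erb:

        <%= form_for(@product) do |f| %>
          <%= f.text_field :name %>
          <%= f.fields_for :user_products do |up| %>
            <%= up.text_field :price %>
          <% end %>
        <% end %>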


  • Attach file to mail using php

    - by ktsixit
    Hi all, I've created a form which contains a file upload field and some other text fields. I'm using PHP to send the form's data via email and attach the file. This is the code I'm using, but it's not working properly: the file is attached to the message correctly, but the rest of the data is not sent. $body="bla bla bla"; $attachment = $_FILES['cv']['tmp_name']; $attachment_name = $_FILES['cv']['name']; if (is_uploaded_file($attachment)) { $fp = fopen($attachment, "rb"); $data = fread($fp, filesize($attachment)); $data = chunk_split(base64_encode($data)); fclose($fp); } $headers = "From: $email<$email>\n"; $headers .= "Reply-To: <$email>\n"; $headers .= "MIME-Version: 1.0\n"; $headers .= "Content-Type: multipart/related; type=\"multipart/alternative\"; boundary=\"----=MIME_BOUNDRY_main_message\"\n"; $headers .= "X-Sender: $first_name $family_name<$email>\n"; $headers .= "X-Mailer: PHP4\n"; $headers .= "X-Priority: 3\n"; $headers .= "Return-Path: <$email>\n"; $headers .= "This is a multi-part message in MIME format.\n"; $headers .= "------=MIME_BOUNDRY_main_message \n"; $headers .= "Content-Type: multipart/alternative; boundary=\"----=MIME_BOUNDRY_message_parts\"\n"; $message = "------=MIME_BOUNDRY_message_parts\n"; $message .= "Content-Type: text/html; charset=\"utf-8\"\n"; $message .= "Content-Transfer-Encoding: quoted-printable\n"; $message .= "\n"; $message .= "$body\n"; $message .= "\n"; $message .= "------=MIME_BOUNDRY_message_parts--\n"; $message .= "\n"; $message .= "------=MIME_BOUNDRY_main_message\n"; $message .= "Content-Type: application/octet-stream;\n\tname=\"" . $attachment_name . "\"\n"; $message .= "Content-Transfer-Encoding: base64\n"; $message .= "Content-Disposition: attachment;\n\tfilename=\"" . $attachment_name . "\"\n\n"; $message .= $data; //The base64 encoded message $message .= "\n"; $message .= "------=MIME_BOUNDRY_main_message--\n"; $subject = 'bla bla bla'; $to="[email protected]"; mail($to,$subject,$message,$headers); Why isn't the $body data sent? Can you help me fix it?
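
    The MIME part markers ("This is a multi-part message...", the boundary lines) have ended up inside $headers, so mailers see malformed headers and the text part never renders. Keeping only real headers in $headers and every boundary-delimited part in $message should fix it. A sketch reusing the post's variables, assuming $boundary holds any unique token:

        $headers  = "From: $email\r\n";
        $headers .= "Reply-To: <$email>\r\n";
        $headers .= "MIME-Version: 1.0\r\n";
        $headers .= "Content-Type: multipart/mixed; boundary=\"$boundary\"\r\n";

        $message  = "--$boundary\r\n";
        $message .= "Content-Type: text/html; charset=\"utf-8\"\r\n\r\n";
        $message .= $body . "\r\n";
        $message .= "--$boundary\r\n";
        $message .= "Content-Type: application/octet-stream; name=\"$attachment_name\"\r\n";
        $message .= "Content-Transfer-Encoding: base64\r\n";
        $message .= "Content-Disposition: attachment; filename=\"$attachment_name\"\r\n\r\n";
        $message .= $data . "\r\n";
        $message .= "--$boundary--\r\n";

        mail($to, $subject, $message, $headers);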


  • Making an efficient algorithm

    - by James P.
    Here's my recent submission for the FB programming contest (qualifying round only requires to upload program output so source code doesn't matter). The objective is to find two squares that add up to a given value. I've left it as it is as an example. It does the job but is too slow for my liking. Here's the points that are obviously eating up time: List of squares is being recalculated for each call of getNumOfDoubleSquares(). This could be precalculated or extended when needed. Both squares are being checked for when it is only necessary to check for one (complements). There might be a more efficient way than a double-nested loop to find pairs. Other suggestions? Besides this particular problem, what do you look for when optimizing an algorithm? public static int getNumOfDoubleSquares( Integer target ){ int num = 0; ArrayList<Integer> squares = new ArrayList<Integer>(); ArrayList<Integer> found = new ArrayList<Integer>(); int squareValue = 0; for( int j=0; squareValue<=target; j++ ){ squares.add(j, squareValue); squareValue = (int)Math.pow(j+1,2); } int squareSum = 0; System.out.println( "Target=" + target ); for( int i = 0; i < squares.size(); i++ ){ int square1 = squares.get(i); for( int j = 0; j < squares.size(); j++ ){ int square2 = squares.get(j); squareSum = square1 + square2; if( squareSum == target && !found.contains( square1 ) && !found.contains( square2 ) ){ found.add(square1); found.add(square2); System.out.println( "Found !" + square1 +"+"+ square2 +"="+ squareSum); num++; } } } return num; }
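
    Addressing all three points at once, with the squares precomputed a single time, only the complement checked, and the inner loop replaced by a set lookup, gives one O(sqrt(n)) pass per query. A sketch:

        import java.util.HashSet;
        import java.util.Set;

        public static int getNumOfDoubleSquares(int target) {
            Set<Integer> squares = new HashSet<Integer>();
            for (int i = 0; i * i <= target; i++) {
                squares.add(i * i);                      // computed once per call
            }
            int num = 0;
            for (int a = 0; 2 * a * a <= target; a++) {  // a <= b, so no pair is counted twice
                if (squares.contains(target - a * a)) {
                    num++;                               // a^2 + b^2 == target for some b >= a
                }
            }
            return num;
        }

    For many queries, the set could instead be built once up to the largest target rather than per call.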


  • An error occurred creating the form. See Exception.InnerException for details. The error is: Object

    - by Ben
    I get this error when attempting to debug my form, I cannot see where at all the error could be (also does not highlight where), anyone have any suggestions? An error occurred creating the form. See Exception.InnerException for details. The error is: Object reference not set to an instance of an object. Public Class Form1 Dim dateCrap As String = "Date:" Dim IPcrap As String = "Ip:" Dim pcCrap As String = "Computer:" Dim programCrap As String = "Program:" Dim textz As String = TextBox1.Text Dim sep() As String = {vbNewLine & vbNewLine} Dim sections() As String = Text.Split(sep, StringSplitOptions.None) Dim NewArray() As String = TextBox1.Text.Split(vbNewLine) Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click For i = 0 To sections.Count - 1 TextBox2.Text = sections(i) Dim text2 As String = TextBox2.Text Dim sep2() As String = {vbNewLine & vbNewLine} Dim sections2() As String = Text.Split(sep, StringSplitOptions.None) Dim FTPinfo() As String = TextBox2.Text.Split(vbNewLine) Dim clsRequest As System.Net.FtpWebRequest = _ DirectCast(System.Net.WebRequest.Create(sections2(0).Replace("Url/Host:", "")), System.Net.FtpWebRequest) clsRequest.Credentials = New System.Net.NetworkCredential(sections2(1).Replace("Login:", ""), (sections2(2).Replace("Password:", ""))) clsRequest.Method = System.Net.WebRequestMethods.Ftp.UploadFile ' read in file... Dim bFile() As Byte = System.IO.File.ReadAllBytes(txtShellDir.Text) ' upload file... Dim clsStream As System.IO.Stream = clsRequest.GetRequestStream() clsStream.Write(bFile, 0, bFile.Length) clsStream.Close() clsStream.Dispose() Next End Sub Private Sub Button3_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button3.Click ListBox1.Items.Clear() Dim fdlg As OpenFileDialog = New OpenFileDialog() fdlg.Title = "Browse for the FTP List you wish to use." 
fdlg.InitialDirectory = Application.ExecutablePath fdlg.Filter = "All files (*.txt)|*.txt|txt files (*.txt)|*.txt" fdlg.FilterIndex = 2 fdlg.RestoreDirectory = True If fdlg.ShowDialog() = DialogResult.OK Then 'ListBox1.Items.AddRange(Split(My.Computer.FileSystem.ReadAllText(fdlg.FileName), vbNewLine)) TextBox1.Text = My.Computer.FileSystem.ReadAllText(fdlg.FileName) End If Dim tmp() As String = TextBox1.Text.Split(CChar(vbNewLine)) For Each line As String In tmp If line.Length > 1 Then TextBox1.AppendText(line & vbNewLine) End If Next TextBox1.Text = TextBox1.Text.Replace(" ", "") TextBox1.Text = TextBox1.Text.Replace("----------------------------------------------------------", vbNewLine) For Each item As String In NewArray ListBox1.Items.Add(item) Next Try For i = 0 To ListBox1.Items.Count - 1 If ListBox1.Items(i).Contains(dateCrap) Then ListBox1.Items.RemoveAt(i) End If Next Catch ex As Exception End Try Try For i = 0 To ListBox1.Items.Count - 1 If ListBox1.Items(i).Contains(IPcrap) Then ListBox1.Items.RemoveAt(i) End If Next Catch ex As Exception End Try Try For i = 0 To ListBox1.Items.Count - 1 If ListBox1.Items(i).Contains(pcCrap) Then ListBox1.Items.RemoveAt(i) End If Next Catch ex As Exception End Try Try For i = 0 To ListBox1.Items.Count - 1 If ListBox1.Items(i).Contains(programCrap) Then ListBox1.Items.RemoveAt(i) End If Next Catch ex As Exception End Try TextBox1.Text = "" For Each thing As Object In ListBox1.Items TextBox1.AppendText(thing.ToString & vbNewLine) Next End Sub Private Sub Button4_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) For i = 0 To ListBox1.SelectedItems.Count - 1 TextBox1.Text = TextBox1.Text & ListBox1.Items.Item(i).ToString() & vbNewLine Next End Sub End Class
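
    The likely cause is visible at the top of the class: field initializers such as Dim textz As String = TextBox1.Text run while the form is being constructed, before InitializeComponent has created the controls, so TextBox1 is still Nothing; hence the NullReferenceException wrapped in the "error occurred creating the form" message. A sketch of the fix:

        ' Declare the fields only; the controls do not exist yet at this point.
        Dim textz As String
        Dim sections() As String
        Dim NewArray() As String

        ' Read the TextBox once the form (and its controls) actually exist.
        Private Sub Form1_Load(ByVal sender As Object, ByVal e As EventArgs) Handles MyBase.Load
            textz = TextBox1.Text
            Dim sep() As String = {vbNewLine & vbNewLine}
            sections = textz.Split(sep, StringSplitOptions.None)  ' split textz, not the form's Text property
            NewArray = TextBox1.Text.Split(CChar(vbNewLine))
        End Sub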


  • -[NSConcreteMutableData release]: message sent to deallocated instance

    - by kamibutt
    Dear members, I am facing a problem: -[NSConcreteMutableData release]: message sent to deallocated instance. I have attached my sample code as well. - (IBAction)uploadImage { NSString *urlString = @"http://192.168.1.108/iphoneimages/uploadfile.php?userid=1&charid=23&msgid=3"; //if(FALSE) for (int i=0; i<[imgArray count]; i++) { // setting up the request object now NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init]; [request setURL:[NSURL URLWithString:urlString]]; [request setHTTPMethod:@"POST"]; /* add some header info now we always need a boundary when we post a file also we need to set the content type You might want to generate a random boundary.. this is just the same as my output from wireshark on a valid html post */ NSString *boundary = [NSString stringWithString:@"---------------------------14737809831466499882746641449"]; NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@",boundary]; [request addValue:contentType forHTTPHeaderField: @"Content-Type"]; /* now lets create the body of the post */ NSMutableData *body = [[NSMutableData data] autorelease]; NSString *str = [NSString stringWithFormat:@"Content-Disposition: form-data; name=\"userfile\"; filename=\"ipodfile%d.jpg\"\r\n",i]; [body appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[[NSString stringWithString:str] dataUsingEncoding:NSUTF8StringEncoding]]; [body appendData:[[NSString stringWithString:@"Content-Type: application/octet-stream\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]]; NSData *imageData = UIImageJPEGRepresentation([imgArray objectAtIndex:i], 90); [body appendData:[NSData dataWithData:imageData]]; [body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]]; // setting the body of the post to the reqeust [request setHTTPBody:body]; // now lets make the connection to the web [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil]; //NSString *returnString = [[NSString alloc] initWithData:returnData encoding:NSUTF8StringEncoding]; //NSLog(@"%@",returnString); [imageData release]; [request release]; //[body release]; } } It successfully uploads the images to the folder and there is no error during execution, but when it completes the process and tries to go back, it gives the error -[NSConcreteMutableData release]: message sent to deallocated instance. Please help me out. Thanks
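
    The crashing object is body itself: [NSMutableData data] already returns an autoreleased instance, so the extra -autorelease queues a second release, and the pool's drain hits a deallocated NSConcreteMutableData once the method finishes. The [imageData release] is the same class of bug, since UIImageJPEGRepresentation also returns an autoreleased object (and note its quality argument is a CGFloat from 0.0 to 1.0, not 90). A sketch of the corrected ownership:

        // Autoreleased once by the convenience constructor; do not autorelease again.
        NSMutableData *body = [NSMutableData data];

        NSData *imageData = UIImageJPEGRepresentation([imgArray objectAtIndex:i], 0.9);
        [body appendData:imageData];
        // no [imageData release]; -- it was never alloc/init/copy/retain-ed here

        [request setHTTPBody:body];
        [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil];
        [request release];   // alloc/init-ed above, so this release is correct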


  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a django website I host with lighttpd + fastcgi. It works great but it seems that the first request always takes up to 3seconds. Subsequent requests are much faster (<1s). I activated access logs in lighttpd in order to track the issue. But I'm kind of stuck. Here are logs where I 'lose' 4s (from 10:04:17 to 10:04:21): 2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi 2012-12-01 10:04:17: (response.c.470) -- before doc_root 2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.473) Path : 2012-12-01 10:04:17: (response.c.521) -- after doc_root 2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.524) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:17: (response.c.541) -- logical -> physical 2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.544) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:21: (response.c.128) Response-Header: HTTP/1.1 200 OK Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT Expires: Sat, 01 Dec 2012 09:14:21 GMT Content-Type: text/html; charset=utf-8 Cache-Control: max-age=600 Transfer-Encoding: chunked Date: Sat, 01 Dec 2012 09:04:21 GMT Server: lighttpd/1.4.28 I guess that if there is a problem, it's whith my configuration. So here is the way I launch my django app: python manage.py runfcgi method=threaded host=127.0.0.1 port=3033 And here is my lighttpd conf: server.modules = ( "mod_access", "mod_alias", "mod_compress", "mod_redirect", "mod_rewrite", "mod_fastcgi", "mod_accesslog", ) server.document-root = "/var/www" server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) server.errorlog = "/var/log/lighttpd/error.log" server.pid-file = "/var/run/lighttpd.pid" server.username = "www-data" server.groupname = "www-data" accesslog.filename = "/var/log/lighttpd/access.log" debug.log-request-header = "enable" debug.log-response-header = "enable" debug.log-file-not-found = "enable" debug.log-request-handling = "enable" debug.log-timeouts = "enable" debug.log-ssl-noise = "enable" debug.log-condition-cache-handling = "enable" debug.log-condition-handling = "enable" fastcgi.server = ( "/finderauto.fcgi" => ( "main" => ( # Use host / port instead of socket for TCP fastcgi "host" => "127.0.0.1", "port" => 3033, #"socket" => "/home/finderadmin/finderauto.sock", "check-local" => "disable", "fix-root-scriptname" => "enable", ) ), ) alias.url = ( "/media" => "/home/user/django/contrib/admin/media/", ) url.rewrite-once = ( "^(/media.*)$" => "$1", "^/favicon\.ico$" => "/media/favicon.ico", "^(/.*)$" => "/finderauto.fcgi$1", ) index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" ) url.access-deny = ( "~", ".inc" ) static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ## Use ipv6 if available #include_shell "/usr/share/lighttpd/use-ipv6.pl" dir-listing.encoding = "utf-8" server.dir-listing = "enable" compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" ) include_shell "/usr/share/lighttpd/create-mime.assign.pl" include_shell "/usr/share/lighttpd/include-conf-enabled.pl" If any of you could help me finding out where I lose these 3 or 4 s. I would much appreciate. Thanks in advance!
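
    The gap sits entirely between lighttpd handing the request to FastCGI and Django's first byte, which usually points at first-request initialization cost (imports, database connections) in a freshly started or idle worker rather than at lighttpd itself. Two cheap things to try, offered as guesses rather than a diagnosis:

        # prefork keeps initialized worker processes waiting instead of spawning on demand
        python manage.py runfcgi method=prefork minspare=2 maxspare=4 host=127.0.0.1 port=3033

        # hypothetical cron warm-up to test the theory: if the slow first request
        # disappears, initialization cost was the culprit
        */5 * * * * curl -s -o /dev/null http://127.0.0.1/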


  • Creating a multiplatform webapp with HTML5 and Google maps

    - by Bart L.
    I'm struggling how to develop a webapp for Android and iOS. My first app was a simple todo app which was easy to test in my browser and it only used html, javascript and css. However, I have to create an app which uses Google Maps Api to get the location. I created a simple html5 page to test which places a marker on a map. It works fine when testing it on my local server. But when I create an .apk file for Android, the app doesn't work. So I'm wondering, isn't it possible to use it like this? Do I have the use the phonegap libraries to use their geolocation library? And if so, how do you handle the development of a webapp in phonegap for multiple OS? Do you have to install an Android environment and an iOS environment to each include the right phonegap library and to test them properly? Update: I use the following code on my webserver and it works perfectly. When I upload it in a zip-folder to the photogap cloud and install the APK file on my phone, it doesn't work. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Simple Geo test</title> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8/jquery.min.js"></script> </head> <body> <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script> <script> function success(position) { var mapcanvas = document.createElement('div'); mapcanvas.id = 'mapcontainer'; mapcanvas.style.height = '200px'; mapcanvas.style.width = '200px'; document.querySelector('article').appendChild(mapcanvas); var coords = new google.maps.LatLng(position.coords.latitude, position.coords.longitude); var options = { zoom: 15, center: coords, mapTypeControl: false, navigationControlOptions: { style: google.maps.NavigationControlStyle.SMALL }, mapTypeId: google.maps.MapTypeId.ROADMAP }; var map = new google.maps.Map(document.getElementById("mapcontainer"), options); var marker = new google.maps.Marker({ position: coords, map: map, title:"You are here!" }); } if (navigator.geolocation) { navigator.geolocation.getCurrentPosition(success); } else { error('Geo Location is not supported'); } </script> <article></article> </body> </html>
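
    Inside a PhoneGap/Cordova wrapper, remote scripts and geolocation only work if the app is granted the permissions and whitelist entries a desktop browser gives you for free, and geolocation calls should wait for the deviceready event. A sketch of the usual Android-side additions (the names are the standard ones, but treat them as assumptions for your PhoneGap version):

        <!-- AndroidManifest.xml: let the WebView reach the network and the GPS -->
        <uses-permission android:name="android.permission.INTERNET" />
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
        <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />

        <!-- config.xml: whitelist the Google Maps domains so the API script loads -->
        <access origin="http://maps.google.com" subdomains="true" />
        <access origin="http://*.googleapis.com" subdomains="true" />

    On a whitelisted build, navigator.geolocation is bridged to the native implementation, so the page's existing JavaScript does not need the PhoneGap-specific API; for testing, the usual arrangement is one thin project per platform, each embedding the same www folder.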


  • Not Inserted into database in PHP script

    - by Aruna
    Hi, I have a form for uploading an Excel file: <form enctype="multipart/form-data" action="http://localhost/joomla/Joomla_1.5.7/index.php?option=com_jumi&fileid=7" method="POST"> Please choose a file: <input name="file" type="file" id="file" /><br /> <input type="submit" value="Upload" /> </form> And in the file http://localhost/joomla/Joomla_1.5.7/index.php?option=com_jumi&fileid=7 I retrieve the uploaded file contents with: <?php echo "Name". $_FILES['file']['name'] . "<br />"; echo "Type: " . $_FILES["file"]["type"] . "<br />"; echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />"; echo "Stored in: " . $_FILES["file"]["tmp_name"] . "<br />"; $string = file_get_contents( $_FILES["file"]["tmp_name"] ); foreach ( explode( "\n", $string ) as $userString ) { $userString = trim( $userString ); echo $userString; $array = explode( ';', $userString ); $userdata = array ( 'cf_id'=>trim( $array[0] ), 'text_0'=>trim( $array[1] ), 'text_1'=>trim( $array[2] ), 'text_2'=>trim( $array[3] ), 'text_3'=>trim($array[4]), 'text_6'=>trim($array[5]), 'text_7'=>trim($array[6]), 'text_9'=>trim($array[7]), 'text_12'=>trim($array[8])); global $db; $db =& JFactory::getDBO(); $query = 'INSERT INTO #__chronoforms_UploadAuthor VALUES ("'.$userdata['cf_id'].'","'.$userdata['text_0'].'","'.$userdata['text_1'].'","'.$userdata['text_2'].'","'.$userdata['text_3'].'","'.$userdata['text_6'].'","'.$userdata['text_7'].'","'.$userdata['text_9'].'")'; $db->Execute($query); echo "<br>"; } I am trying to insert the contents into the table #__chronoforms_UploadAuthor in the Joomla database. It doesn't show any error, but nothing is inserted into the database. Please help me: why are the contents not inserted, and how can I get them inserted into the database?
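
    With Joomla 1.5's JDatabase, the query has to be loaded with setQuery() and run with query() (the 1.5 API uses those rather than Execute()), which is also why no error ever surfaces. Listing the columns explicitly guards against a column/value mismatch as well: the original VALUES list has eight entries while nine fields were collected. A sketch:

        $db =& JFactory::getDBO();
        $query = 'INSERT INTO #__chronoforms_UploadAuthor'
               . ' (cf_id, text_0, text_1, text_2, text_3, text_6, text_7, text_9, text_12)'
               . ' VALUES ('
               . $db->Quote($userdata['cf_id']) . ',' . $db->Quote($userdata['text_0']) . ','
               . $db->Quote($userdata['text_1']) . ',' . $db->Quote($userdata['text_2']) . ','
               . $db->Quote($userdata['text_3']) . ',' . $db->Quote($userdata['text_6']) . ','
               . $db->Quote($userdata['text_7']) . ',' . $db->Quote($userdata['text_9']) . ','
               . $db->Quote($userdata['text_12']) . ')';
        $db->setQuery($query);
        if (!$db->query()) {           // returns false when MySQL rejects the statement
            echo $db->getErrorMsg();   // surface the real error instead of failing silently
        }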


  • need help in positioning

    - by teki
    i am using "beassistance validation" for my form and using tooltip plugin and it work fine except that positioning. please have a look at this link and i upload my form to get an idea what i am talking about. what i want is: if my validation fails i want to display the message after the tooltip icon, but its displaying the message before the tooltip icon. please see here here is the .css i am using: validation .css #aspnetForm { width: 670px; } #aspnetForm label.error { margin-left: 10px; width: auto; display: inline; } form.cmxform { width: 50em; } em.error { background: url("Images/unchecked.gif") no-repeat 0px 0px; padding-left: 16px; } em.success { background: url("Images/checked.gif") no-repeat 0px 0px; padding-left: 16px; } form.cmxform label.error { margin-left: auto; width: 250px; } #aspnetForm label.error { background: url("Images/unchecked.gif") no-repeat 0px 0px; padding-left: 16px; padding-bottom: 2px; font-weight: bold; color: #EA5200; } #aspnetForm label.checked { background: url("Images/checked.gif") no-repeat 0px 0px; } em.error { color: black; } #warning { display: none; } <asp:Label runat="server" ID='Label8' >Start Date:</asp:Label> <asp:TextBox ID="txtStartDate" runat='server' ></asp:TextBox> <a style="cursor: hand" class="tooltip" title="Select the starting date of a visit, example 05/05/2010 (MM/DD/YYYY)!"> <img alt="" src="Scripts/JQuery/Tooltip/icon_tooltip.gif" style="width: 11px; height: 11px" /></a> <p> </p> <asp:Label runat="server" ID='Label9' >End Date:</asp:Label> <asp:TextBox ID="txtEndDate" runat='server' ></asp:TextBox> <a style="cursor: hand" class="tooltip" title="Select the end date of a visit, example 05/06/2010 (MM/DD/YYYY)!"> <img alt="" src="Scripts/JQuery/Tooltip/icon_tooltip.gif" style="width: 11px; height: 11px" /></a> tooltip.css #tooltip{ position:absolute; border:1px solid #333; background:#f7f5d1; padding:2px 5px; color:#333; display:none; }


  • Week in Geek: USDA Chooses Microsoft for Cloud Services Edition

    - by Asian Angel
    This week we learned how to create geeky LED holiday lights with old bottles, dig deeper in Windows Defrag via the command prompt, use Google Chrome’s drag/drop feature to upload files easier, find great gift recommendations by looking through the How-To Geek holiday gift guide, and have fun adding Merry Christmas fonts to our computers. Photo by ntr23. Random Geek Links It has been a busy week, so we have extra news link goodness with information that is good for you to know. USDA making the move to Microsoft The U.S. Department of Agriculture has announced that it has chosen Microsoft to host things like e-mail, instant messaging, and collaboration through the software giant’s Business Productivity Online Suite. Google says it was cut off from USDA project bid Google is claiming that it was not given a chance to bid on a cloud-computing project for the U.S. Department of Agriculture, for which the contract was awarded to rival Microsoft. Apache is being forced into a Java Fork When Oracle rolled over Apache and Google’s objections to its Java plans in December, the scene was set for Apache to leave and, eventually, force a Java code fork. Tumblr explains daylong outage After experiencing an outage that started on Sunday afternoon and stretched through most of the day yesterday, Tumblr has explained what happened. Google demos Chrome OS, launches pilot program During a press briefing this week in San Francisco, Google launched the Chrome application store and demonstrated Chrome OS, its browser-centric netbook operating system. Don’t expect Spotify in U.S. this holiday season As of last week, Spotify had yet to sign a single licensing deal with a major label, after spending more than a year negotiating, multiple music sources told CNET. December 2010 Patch Tuesday will come with most bulletins ever According to the Microsoft Security Response Center, Microsoft will issue 17 Security Bulletins addressing 40 vulnerabilities on Tuesday, December 14. It will also host a webcast to address customer questions the following day. Hacker plants back door in Symbian firmware Indian hacker Atul Alex has had a look at the firmware for Symbian S60 smartphones and come up with a back door for it. PC quarantines raise tough complexities The concept of quarantining PCs to prevent widespread infection is “interesting, but difficult to implement, with far too many problems”, said security experts. Symantec: DDoS attacks hard to defend It has surfaced that the distributed denial of service (DDoS) attacks on Visa and MasterCard Web sites on Wednesday were carried out by a toolkit known as low orbit ion cannon (LOIC). Web Sockets and the risks of unfinished standards Enthusiasm for a promising new standard called Web Sockets has quickly cooled in some quarters as a potential security problem led some browser makers to hastily postpone support. Internet Explorer 9 to get tracking protection Microsoft is making changes to Internet Explorer 9’s security features that will better enable users to keep sites from tracking their activity across browsing sessions. NASA sold PCs with sensitive data NASA failed to remove sensitive data from computers that it sold, according to an audit report released this week. Cybercrooks create fake Amazon receipts The bad guys have created yet another online scam, this one involving fake Amazon receipts. 
World of Warcraft character move fees waived Until December 22, Blizzard will allow free realm transfers from 25 highly populated servers to alleviate log-in queues or performance issues. (The free transfers are one-way and one-time only.) SpaceX Dragon reaches orbit atop a Falcon with a fiery tail The Space Exploration Technologies corporation has become the first nongovernmental entity to put a vehicle into low Earth orbit. Geek Video of the Week If birds have wings, then why are the Angry Birds using slingshots? Photo by Dorkly Bits. Wait… Birds have Wings, Why are the Angry Ones Using Slingshots? Sysadmin Geek Tips How To Setup Email Alerts on Linux Using Gmail or SMTP Linux machines may require administrative intervention in countless ways, but without manually logging into them how would you know about it? Here’s how to setup emails to get notified when your machines want some tender love and attention. Random TinyHacker Links Red Panda Webcam Support Firefox and the Knoxville Zoo’s Red Panda program. Christmas Icons (Icons we like) Superb set of holiday icons by lgp85 at deviantArt. Download the .zip and use as .png or convert to .ico at Convertico.com or with tiny app Imagicon. Super User Questions Enjoy reading the great answers to this week’s popular questions from Super User Useful USB boot disks? DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder? What are other ways to backup my files if I do not have an external drive? Anti virus what is the difference between these all? How can I block all Facebook elements/content? How-To Geek Weekly Article Recap Have you had a busy week between work and preparing for the holidays? Get caught up on your HTG reading with our hottest articles of the week. 20 Windows Keyboard Shortcuts You Might Not Know The 50 Best Registry Hacks that Make Windows Better LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology HTG Explains: Which Linux File System Should You Choose? How to Use and Customize Google Chrome Web Apps One Year Ago on How-To Geek This week’s batch of retro geeky goodness is all about customizing Windows 7. ClassicShell Adds Classic Start Menu and Explorer Features to Windows 7 Get an Aero-Styled Classic Start Menu in Windows 7 Customize the Windows 7 Logon Screen Get the Classic Style Network Activity Indicator Back in Windows 7 How To Enable Check Boxes for Items In Windows 7 The Geek Note We would like you to join us in welcoming Jason Fitzpatrick to the writing staff here at How-To Geek. He started with us this past week, so take some time to read through his articles about the Wii, Kindle, & PlayStation 2 Peripherals and leave a friendly comment to say “Hi”! Got a great tip to share? Make sure to send it in to us at [email protected]. Photo by real00.


  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, backup and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. Photo by _nash. Weekly Feature Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much needed icons missing (or you desire a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px and scalable sizes. Photo by Asian Angel. Faenza Variants Random Geek Links Another week with extra link goodness to help keep you on top of the news. Photo by Asian Angel. LastPass acquires Xmarks, premium service announced Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a “free” to a “freemium” business model. WikiLeaks reappears on European Net domains WikiLeaks has re-emerged on a Swiss Internet domain followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet. Iran: Yes, Stuxnet hurt our nuclear program The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program. More Windows Rogues than Just AV – Fake Defragmenter Check Disk Don’t think for a second that rogues are limited to scareware, because as so-called products such as “System Defragmenter”, “Scan Disk” “Check Disk” prove, they’re not. Internet Explorer’s Protected Mode can be bypassed Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts. Can you really see who viewed your Facebook profile? Rogue application spreads virally Once again, a rogue application is spreading virally between Facebook users pretending to offer you a way of seeing who has viewed your profile. More holes in Palm’s WebOS Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm’s WebOS smartphone operating system. Next-gen banking Trojans hit APAC With the proliferation of banking Trojans, Web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action. AVG update cripples 64-bit computers A signature update automatically deployed by the AVG virus scanner Thursday has crippled numerous computers. Article includes link to forums to fix computers affected after a restart. Congress moves to outlaw ‘mystery charges’ for Web shoppers Legislation that makes it illegal for Web merchants and so-called post-transaction marketers to charge credit cards without the card owners’ say-so came closer to becoming law this week. Ballmer Set to “Look Into” Windows Home Server Drive Extender Fiasco Tuesday’s announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web. Google tweaks search recipe to ding scam artists Google has changed its search algorithm to penalize sites deemed to provide an “extremely poor user experience” following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine optimization tactic. 
Geek Video of the Week Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? Photo by CollegeHumor. Early Adopters Through History Random TinyHacker Links Fix Issues in Windows 7 Using Reliability Monitor Learn how to analyze Windows 7 errors and then fix them using the built-in reliability monitor. Learn About IE Tab Groups Tab groups is a useful feature in IE 8. Here’s a detailed guide to what it is all about. Google’s Book Helps You Learn About Browsers and Web A cool new online book by the Google Chrome team on browsers and the web. TrustPort Internet Security 2011 – Good Security from a Less Known Provider TrustPort is not exactly a well-known provider of security solutions. At least not in the consumer space. This review tests in detail their latest offering. How the World is Using Cell phones An infographic showing the shocking demographics of cell phone use. Super User Questions See the great answers to these questions from Super User. I am unable to access my C drive. It says it is unable to display current owner. List of Windows special directories/shortcuts like ‘%TEMP%’ Is using multiple passes for wiping a disk really necessary? How can I view two files side by side in Notepad++ Is there any tool that automatically puts screenshots to my Dropbox? How-To Geek Weekly Article Recap Look through our hottest articles from this past week at How-To Geek. How to Create a Software RAID Array in Windows 7 9 Alternatives for Windows Home Server’s Drive Extender Why Doesn’t Disk Cleanup Delete Everything from the Temp Folder? Ask the Readers: How Much Do You Customize Your Operating System? How to Upload Really Large Files to SkyDrive, Dropbox, or Email One Year Ago on How-To Geek Enjoy reading through these awesome articles from one year ago. How To Upgrade from Vista to Windows 7 Home Premium Edition How To Fix No Aero Transparency in Windows 7 Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista Rename the Guest Account in Windows 7 for Enhanced Security Disable Error Reporting in XP, Vista, and Windows 7 The Geek Note That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. Photo by Tony the Misfit.


  • Week in Geek: US Govt E-card Scam Siphons Confidential Data Edition

    - by Asian Angel
    This week we learned how to “back up photos to Flickr, automate repetitive tasks, & normalize MP3 volume”, enable “stereo mix” in Windows 7 to record audio, create custom papercraft toys, read up on three alternatives to Apple’s flaky iOS alarm clock, decorated our desktops & app docks with Google icon packs, and more. Photo by alexschlegel. Random Geek Links It has been a busy week on the security & malware fronts and we have a roundup of the latest news to help keep you updated. Photo by TopTechWriter.US. US govt e-card scam hits confidential data A fake U.S. government Christmas e-card has managed to siphon off gigabytes of sensitive data from a number of law enforcement and military staff who work on cybersecurity matters, many of whom are involved in computer crime investigations. Security tool uncovers multiple bugs in every browser Michal Zalewski reports that he discovered the vulnerability in Internet Explorer a while ago using his cross_fuzz fuzzing tool and reported it to Microsoft in July 2010. Zalewski also used cross_fuzz to discover bugs in other browsers, which he also reported to the relevant organisations. Microsoft to fix Windows holes, but not ones in IE Microsoft said that it will release two security bulletins next week fixing three holes in Windows, but it is still investigating or working on fixing holes in Internet Explorer that have been reportedly exploited in attacks. Microsoft warns of Windows flaw affecting image rendering Microsoft has warned of a Windows vulnerability that could allow an attacker to take control of a computer if the user is logged on with administrative rights. Windows 7 Not Affected by Critical 0-Day in the Windows Graphics Rendering Engine While confirming that details on a Critical zero-day vulnerability have made their way into the wild, Microsoft noted that customers running the latest iteration of Windows client and server platforms are not exposed to any risks. Microsoft warns of Office-related malware Microsoft’s Malware Protection Center issued a warning this week that it has spotted malicious code on the Internet that can take advantage of a flaw in Word and infect computers after a user does nothing more than read an e-mail. *Refers to a flaw that was addressed in the November security patch releases. Make sure you have all of the latest security updates installed. Unpatched hole in ImgBurn disk burning application According to security specialist Secunia, a highly critical vulnerability in ImgBurn, a lightweight disk burning application, can be used to remotely compromise a user’s system. Hole in VLC Media Player Virtual Security Research (VSR) has identified a vulnerability in VLC Media Player. In versions up to and including 1.1.5 of the VLC Media Player. Flash Player sandbox can be bypassed Flash applications run locally can read local files and send them to an online server – something which the sandbox is supposed to prevent. Chinese auction site touts hacked iTunes accounts Tens of thousands of reportedly hacked iTunes accounts have been found on Chinese auction site Taobao, but the company claims it is unable to take action unless there are direct complaints. What happened in the recent Hotmail outage Mike Schackwitz explains the cause of the recent Hotmail outage. DOJ sends order to Twitter for Wikileaks-related account info The U.S. 
Justice Department has obtained a court order directing Twitter to turn over information about the accounts of activists with ties to Wikileaks, including an Icelandic politician, a legendary Dutch hacker, and a U.S. computer programmer. Google gets court to block Microsoft Interior Department e-mail win The U.S. Federal Claims Court has temporarily blocked Microsoft from proceeding with the $49.3 million, five-year DOI contract that it won this past November. Google Apps customers get email lockdown Companies and organisations using Google Apps are now able to restrict the email access of selected users. LibreOffice Is the Default Office Suite for Ubuntu 11.04 Matthias Klose has announced some details regarding the replacement of the old OpenOffice.org 3.2.1 packages with the new LibreOffice 3.3 ones, starting with the upcoming Ubuntu 11.04 (Natty Narwhal) Alpha 2 release. Sysadmin Geek Tips Photo by Filomena Scalise. How to Setup Software RAID for a Simple File Server on Ubuntu Do you need a file server that is cheap and easy to setup, “rock solid” reliable, and has Email Alerting? This tutorial shows you how to use Ubuntu, software RAID, and SaMBa to accomplish just that. How to Control the Order of Startup Programs in Windows While you can specify the applications you want to launch when Windows starts, the ability to control the order in which they start is not available. However, there are a couple of ways you can easily overcome this limitation and control the startup order of applications. Random TinyHacker Links Using Opera Unite to Send Large Files A tutorial on using Opera Unite to easily send huge files from your computer. WorkFlowy is a Useful To-do List Tool A cool to-do list tool that lets you integrate multiple tasks in one single list easily. Playing Flash Videos on iOS Devices Yes, you can play flash videos on jailbroken iPhones. Here’s a tutorial. Clear Safari History and Cookies On iPhone A tutorial on clearing your browser history on iPhone and other iOS devices. Monitor Your Internet Usage Here’s a cool, cross-platform tool to monitor your internet bandwidth. Super User Questions See what the community had to say on these popular questions from Super User this week. Why is my upload speed much less than my download speed? Where should I find drivers for my laptop if it didn’t come with a driver disk? OEM Office 2010 without media – how to reinstall? Is there a point to using theft tracking software like Prey on my laptop, if you have login security? Moving an “all-in-one” PC when turned on/off How-To Geek Weekly Article Recap Get caught up on your HTG reading with our hottest articles from this past week. How to Combine Rescue Disks to Create the Ultimate Windows Repair Disk How To Boot 10 Different Live CDs From 1 USB Flash Drive What is Camera Raw, and Why Would a Professional Prefer it to JPG? Did You Know Facebook Has Built-In Shortcut Keys? The How-To Geek Guide to Audio Editing: The Basics One Year Ago on How-To Geek Enjoy looking through our latest gathering of retro article goodness. Learning Windows 7: Create a Homegroup & Join a New Computer To It How To Disconnect a Machine from a Homegroup Use Remote Desktop To Access Other Computers On a Small Office or Home Network How To Share Files and Printers Between Windows 7 and Vista Allow Users To Run Only Specified Programs in Windows 7 The Geek Note That is all we have for you this week and we hope your first week back at work or school has gone very well now that the holidays are over. Know a great tip? 
Send it in to us at [email protected]. Photo by Pamela Machado.

    Read the article

  • Interesting things – Twitter annotations and your phone as a web server

    - by jamiet
I overheard/read a couple of things today that really made me, data junkie that I am, take a step back and think, “Hmmm, yeah, that could be really interesting” and I wanted to make a note of them here so that (a) I could bring them to the attention of anyone that happens to read this and (b) I can maybe come back here in a few years and see if either of these have come to fruition.

Your phone as a web server
While I was listening to Jon Udell’s (twitter) “Interviews with Innovators Podcast” today, in which he interviewed Herbert Van de Sompel (twitter) about his Momento project, Jon and Herbert made the following remarks:
Jon: [some people] really had this vision of a web of servers, the notion that every node on the internet, every connected entity, is potentially a server and a client…we can see where we’re getting to a point where these endpoint devices we have in our pockets are going to be massively capable and it may be in the not too distant future that significant chunks of the web archive will be cached all over the place including on your own machine…
Herbert: wasn’t it Opera who at one point turned your browser into a server?
That really got my brain ticking. We all carry a mobile phone with us and therefore we all potentially carry a mobile web server with us as well, and to my mind the only thing really stopping that from happening is the capabilities of the phone hardware, the capabilities of the network infrastructure and the will to just bloody do it. Certainly all the standards required for addressing a web server on a phone already exist (to this uninitiated observer DNS and IPv6 seem to solve that problem) so why not? I tweeted about the idea and Rory Street answered back with “why would you want a phone to be a web server?”. It’s a fair question and one that I would like to try and answer. Mobile phones are increasingly becoming our window onto the world as we use them to upload messages to Twitter, record our location on FourSquare or interact with our friends on Facebook, but in each of these cases some other service is acting as our intermediary; to see what I’m thinking you have to go via Twitter, to see where I am you have to go to FourSquare (I’m using ‘I’ liberally, I don’t actually use FourSquare before you ask). Why should this have to be the case? Why can’t that data be decentralised? Why can’t we be masters of our own data universe? If my phone acted as a web server then I could expose all of that information without needing those intermediary services. I see a time when we can pass around URLs such as the following:
http://jamiesphone.net/location/current – Where is Jamie right now?
http://jamiesphone.net/location/2010-04-21 – Where was Jamie on 21st April 2010?
http://jamiesphone.net/thoughts/current – What’s on Jamie’s mind right now?
http://jamiesphone.net/blog – What documents is Jamie sharing with me?
http://jamiesphone.net/calendar/next7days – Where is Jamie planning to be over the next 7 days?
and those URLs get served off of the phone in our pockets. If we govern that data then we can control who has access to it and (crucially) how long it’s available for. Want to wipe yourself off the face of the web? It’s pretty easy if you’re in control of all the data – just turn your phone off. None of this exists today but I look forward to a time when it does.
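Purely as a thought experiment, here is a minimal sketch of what one of those endpoints could look like if a device self-hosted it. Everything here – the localhost address standing in for jamiesphone.net, the route, the coordinates, the JSON shape – is invented for illustration:

using System;
using System.Net;
using System.Text;

class PhoneWebServer
{
    static void Main()
    {
        // Hypothetical: in the vision this would answer for jamiesphone.net;
        // here we just listen locally.
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();

            // e.g. GET /location/current -> the owner's current position (made-up values)
            string payload = context.Request.Url.AbsolutePath == "/location/current"
                ? "{ \"lat\": 51.5074, \"lon\": -0.1278 }"
                : "{ \"error\": \"unknown resource\" }";

            byte[] buffer = Encoding.UTF8.GetBytes(payload);
            context.Response.ContentType = "application/json";
            context.Response.OutputStream.Write(buffer, 0, buffer.Length);
            context.Response.Close();
        }
    }
}

The point is not the code itself – it is that the data never leaves the owner’s device until someone asks for it, and switching the server off withdraws it completely.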
Opera really were onto something last June when they announced Opera Unite (admittedly Unite only works because Opera provide an intermediary DNS-alike system – it isn’t totally decentralised).

Opening up Twitter annotations
Last week Twitter held their first developer conference called Chirp where they announced an upcoming new feature called ‘Twitter Annotations’; in short this will allow us to attach metadata to a Tweet thus enhancing the tweet itself. Think of it as a richer version of hashtags. To think of it another way, Twitter are turning their data into a humongous Entity-Attribute-Value or triple-tuple store. That alone has huge implications both for the web and Twitter as a whole – the ability to enrich that 140 characters of data and thus make it more useful is indeed compelling – however today I stumbled upon a blog post from Eugene Mandel entitled Tweet Annotations – a Way to a Metadata Marketplace? where he proposed the idea of allowing tweets to have metadata added by people other than the person who tweeted the original tweet. This idea really fascinated me, especially when I read some of the potential uses that Eugene and his commenters suggested. They included:
Amazon could attach an ISBN to a tweet that mentions a book. Specialist client apps for book lovers could be built up around this metadata.
Advertisers could pay to place adverts in metadata. The revenue generated from those adverts could be shared with the tweeter or the people who add the metadata.
Granted, allowing anyone to add metadata to a tweet has the potential to create a spam problem the like of which we haven’t even envisaged, but spam hasn’t halted the growth of the web and neither should it halt the growth of data annotations. The original tweeter should of course be able to determine who can add metadata and whether it should be moderated. As Eugene says himself: “Opening publishing tweet annotations to anyone will open the way to a marketplace of metadata where client developers, data mining companies and advertisers can add new meaning to Twitter and build innovative businesses.” What Eugene and his followers did not mention is what I think is potentially the most fascinating use of opening up annotations. Google’s success today is built on their page rank algorithm that measures the validity of a web page by the number of incoming links to it and the page rank of the sites containing those links – it’s a system built on reputation. Twitter annotations could open up a new paradigm however – let’s call it People rank – where reputation can be measured by the metadata that people choose to apply to links and the websites containing those links. It’s not hard to see why Google and Microsoft have paid big bucks to get access to the Twitter firehose! Neither of these features, phones as a web server or the ability to add annotations to other people’s tweets, exist today but I strongly believe that they could dramatically enhance the web as we know it today. I hope to look back on this blog post in a few years in the knowledge that these ideas have been put into place. @Jamiet

    Read the article

  • SQL Azure Reporting Limited CTP Arrived

    - by Shaun
It’s been about 3 months since I registered for the SQL Azure Reporting CTP on Microsoft Connect after TechED 2010 China. Today when I checked my mailbox I found that the SQL Azure team had just accepted my request and sent the activation code over to me. So let’s have a look at the new SQL Azure Reporting.

Concept
SQL Azure Reporting provides cloud-based reporting as a service, built on SQL Server Reporting Services and SQL Azure technologies. Cloud-based reporting solutions such as SQL Azure Reporting provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead for report servers; and secure access, viewing, and management of reports. By using the SQL Azure Reporting service, we can:
Embed the Visual Studio Report Viewer ADO.NET Ajax control or Windows Form control to view the reports deployed on the SQL Azure Reporting Service in our web or desktop application.
Leverage the SQL Azure Reporting SOAP API to manage and retrieve the report content from any kind of application.
Use the SQL Azure Reporting Service Portal to navigate and view the reports deployed on the cloud.
Since SQL Azure Reporting was built on the SQL Server 2008 R2 Reporting Service, we can use any tools we are familiar with, such as the SQL Server Integration Studio and the Visual Studio Report Viewer. The SQL Azure Reporting Service runs as a remote SQL Server Reporting Service, just on the cloud rather than on a server beside us.

Establish a New SQL Azure Reporting
Let’s move to the windows azure developer portal and click the Reporting item in the left side navigation bar. If you don’t have the activation code you can click the Sign Up button to send a request to Microsoft Connect. Since I had already received the activation code mail, I clicked the Provision button. Then, after agreeing to the terms of the service, I selected the subscription where my SQL Azure Reporting CTP should be provisioned. In this case I selected my free Windows Azure Pass subscription. Then the final step: paste the activation code and enter the password of our SQL Azure Reporting Service. The user name of the SQL Azure Reporting will be generated by SQL Azure automatically. After a while the new SQL Azure Reporting Server will be shown on our developer portal. The Reporting Service URL and the user name will be shown as well. We can reset the password from the toolbar button.

Deploy Report to SQL Azure Reporting
If you are familiar with SQL Server Reporting Service you will find this part very similar to what you know and what you have done before. Firstly we open the SQL Server Business Intelligence Development Studio and create a new Report Server Project. Then we create a shared data source where the report data will be retrieved from. This data source can be SQL Azure, but we can use a local SQL Server or another database if its port is opened up. In this case we use a SQL Azure database located in the same data center as our reporting service. In the Credentials tab page we enter the user name and password for this SQL Azure database. The SQL Azure Reporting CTP is only available in the North US Data Center for now, so the related SQL Server and hosted service had better be placed in the same data center to avoid external data transfer fees. Then we create a very simple report: just retrieve all records from a table named Members and have a table in the report to list them.
In the data source selection step we choose the shared data source we created before, then enter the T-SQL to select all records from the Member table, then put all fields into the table columns. The report will look like the following.
In order to deploy the report onto the SQL Azure Reporting Service we need to update the project properties. Right click the project node in the solution explorer and select the property item. In the Target Server URL item we specify the reporting server URL of our SQL Azure Reporting. We can go back to the developer portal and select the reporting node on the left side, then copy the Web Service URL and paste it here. But notice that we need to append “/reportserver” after pasting. Then just click the Deploy menu item in the context menu of the project, and Visual Studio will compile the report and then upload it to the reporting service accordingly. In this step we will be prompted to input the user name and password of our SQL Azure Reporting Service. We can get the user name from the developer portal, just next to the Web Service URL on the SQL Azure Reporting page. And the password is the one we specified when we created the reporting service. After about one minute the report will be deployed successfully.

View the Report in Browser
SQL Azure Reporting allows us to view the reports deployed on the cloud from a standard browser. We copy the Web Service URL from the reporting service main page and append “/reportserver”, using the HTTPS protocol, and we get the SQL Azure Reporting Service login page. After entering the user name and password of the SQL Azure Reporting Service we can see the directories and reports listed. Clicking the report will launch the Report Viewer to render the report.

View Report in a Web Role with the Report Viewer
The ASP.NET and Windows Form Report Viewers work well with the SQL Azure Reporting Service too. We can create an ASP.NET Web Role and add the Report Viewer control to the default page. What we need to change on the report viewer is:
Change the Processing Mode to Remote.
Specify the Report Server URL under the Server Remote category as the SQL Azure Reporting Web Service URL with “/reportserver” appended.
Specify the Report Path to the report which we want to display. The report name should NOT include the extension name. For example, my report was in the SqlAzureReportingTest project and named MemberList.rdl, so the report path should be /SqlAzureReportingTest/MemberList.
And the next step is to specify the SQL Azure Reporting credentials. We can use the following class to wrap the report server credential.

private class ReportServerCredentials : IReportServerCredentials
{
    private string _userName;
    private string _password;
    private string _domain;

    public ReportServerCredentials(string userName, string password, string domain)
    {
        _userName = userName;
        _password = password;
        _domain = domain;
    }

    // No impersonation or network credentials are used; we authenticate with forms credentials.
    public WindowsIdentity ImpersonationUser
    {
        get { return null; }
    }

    public ICredentials NetworkCredentials
    {
        get { return null; }
    }

    // Supply the SQL Azure Reporting user name, password and server (authority) as forms credentials.
    public bool GetFormsCredentials(out Cookie authCookie, out string user, out string password, out string authority)
    {
        authCookie = null;
        user = _userName;
        password = _password;
        authority = _domain;
        return true;
    }
}

And then in the Page_Load method, pass it to the report viewer.
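Incidentally, the three viewer settings listed above can also be applied in code-behind rather than through the designer. A minimal sketch – assuming the standard Microsoft.Reporting.WebForms ReportViewer control named ReportViewer1, and with a placeholder server URL:

// Sketch: configure the viewer for a remote SQL Azure Reporting report (e.g. in Page_Load).
ReportViewer1.ProcessingMode = Microsoft.Reporting.WebForms.ProcessingMode.Remote;
// The Web Service URL from the portal, with "/reportserver" appended.
ReportViewer1.ServerReport.ReportServerUrl = new Uri("https://<your-server>.reporting.windows.net/reportserver");
// /<project folder>/<report name>, without the .rdl extension.
ReportViewer1.ServerReport.ReportPath = "/SqlAzureReportingTest/MemberList";

The Page_Load wiring for the credentials then looks like this.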
protected void Page_Load(object sender, EventArgs e)
{
    ReportViewer1.ServerReport.ReportServerCredentials = new ReportServerCredentials(
        "<user name>",
        "<password>",
        "<sql azure reporting web service url>");
}

Finally, deploy it to Windows Azure and enjoy the report.

Summary
In this post I introduced the SQL Azure Reporting CTP, which has just become available. Like other features in Windows Azure, SQL Azure Reporting is very similar to SQL Server Reporting. As you can see in this post, we can use the existing and familiar tools to build and deploy the reports and display them on a website. But SQL Azure Reporting is just in the CTP stage, which means:
It is free.
There’s no support for it.
It is only available at the North US Data Center.
You can get more information about the SQL Azure Reporting CTP from the following links:
SQL Azure Reporting Limited CTP at MSDN
SQL Azure Reporting Samples at TechNet Wiki
You can download the solutions and the projects used in this post here.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows Azure Evolution – Welcome to VS2012

    - by Shaun
When Microsoft released the first preview version of Windows 8 and Visual Studio, many people in the community asked whether the windows azure tool was available for it. The answer was “NO”. Microsoft promised that the windows azure tool would support only Visual Studio 2010 for the time being, but that it should work with 2012 once 2012 was finally released. And now, along with the newly published windows azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with the Visual Studio 2012 RC.

You can retrieve the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also needs some other components. To download the latest windows azure SDK from the Web Platform Installer, just go to the windows azure website, click Develop, then .NET, and click the blue “install” button. Then you need to select which version of Visual Studio you want to use, Visual Studio 2010 or Visual Studio 2012 RC. After selecting the version you will download an EXE file. This file will lead you to install the Web Platform Installer 4.0 (if you haven’t installed it) and the latest windows azure SDK. You can see the version name is June 2012, 1.7. Finally the WebPI will detect the dependent components you need to download and begin to install them. But if you want to challenge yourself you can download the components and install them manually. The standalone installations are listed on this page with instructions on how to install them and the necessary prerequisites.

Once you have finished the installation you can open the Visual Studio 2012 RC and, as usual, it needs to be run as administrator. If you click the New Project link on the start page and navigate to the Cloud category you will find that there is no project template available. Is something wrong? No: if you change the target framework from the default .NET 4.5 to .NET 4 you will see the azure project template. This is because the windows azure instances do not currently support .NET 4.5. After clicking OK you will see the role creation window, which is similar to what you have seen before. But there are some new role templates in this SDK. Firstly, you now have an ASP.NET MVC 4 web role available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile and WebAPI on the cloud. Then there are two new worker role templates, “Cache Worker Role” and “Worker Role with Service Bus Queue”. “Worker Role with Service Bus Queue” is a worker role with the necessary references added for accessing the Windows Azure Service Bus Queue. It also has some basic sample code in the worker role class which reads messages from the queue once started (see the sketch below). The “Cache Worker Role” is a worker role which has the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching. It allows the role instances to use their memory as an in-memory distributed cache cluster. By using this feature you can have one or more worker roles act as dedicated cache clusters. Alternatively, you can use part of your web role and worker role’s memory as the cache cluster as well. Let’s just create an ASP.NET MVC 4 Web Role and press F5 to run it under the local emulator. If you have been working with azure for a while you will know that, on a fresh azure SDK installation, the local storage emulator needs to be set up before running locally.
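To give an idea of the pattern the “Worker Role with Service Bus Queue” template scaffolds, here is a rough sketch of such a receive loop. This is written from memory and simplified – the queue name, the connection-string setting name and the exact structure are assumptions rather than the template’s literal code:

using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private const string QueueName = "ProcessingQueue"; // assumed name
    private QueueClient _client;
    private volatile bool _stopped;

    public override bool OnStart()
    {
        // The connection string lives in the role's service configuration.
        var connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
        _client = QueueClient.CreateFromConnectionString(connectionString, QueueName);
        return base.OnStart();
    }

    public override void Run()
    {
        while (!_stopped)
        {
            // Receive blocks until a message arrives or a timeout elapses (returning null).
            BrokeredMessage message = _client.Receive();
            if (message != null)
            {
                // ... process the message here ...
                message.Complete(); // removes it from the queue
            }
        }
    }

    public override void OnStop()
    {
        _stopped = true;
        _client.Close();
        base.OnStop();
    }
}

Now, back to running the web role under the local emulator.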
In this version, when we start the azure project, Visual Studio will check whether the storage emulator has been initialized. If not, it will run the initializer automatically. And as you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature. It will create the emulator database and tables in the default local database. You can set the storage emulator to use a standard SQL Server default instance by using the command “dsinit /instance:.”. The “dsinit” tool is now located at %PROGRAM FILES%\Microsoft SDKs\Windows Azure\Emulator\devstore. After Visual Studio compiled and deployed the package our website should be shown in the browser. This is the MVC 4 Web Role home page on my Windows 8 machine in IE10. Another thing you might notice is that in this version the compute emulator utilizes IIS Express to host the web roles instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code for accessing the storage service. All of this is the same as what you are doing now on SDK 1.6. You can switch to using IIS to run your web role in the local emulator: just open the windows azure project property window and, on the Web page, select “Use IIS Web Server”. For more information about this please have a look at Nuno’s blog post. In the role property page in Visual Studio there are no massive changes. You can configure your role settings such as the endpoints, certificates, local storage, etc. One thing that was added is the Caching tab. Here you can specify whether to enable the caching feature, and how much memory you want to use as the cache cluster. I will introduce more details about it in future posts. The publish and package features are also unchanged. You can publish your project to azure directly through Visual Studio 2012, or you can create the package and upload it manually. Below is the SDK version of my deployment, which is 1.7.30602.1703, in the developer portal.

Summary
In this post I introduced the new Windows Azure SDK 1.7, especially how it works with the latest Visual Studio 2012 RC. There are no significant changes in the visual studio tool in this version, but there are some small enhancements such as ASP.NET MVC 4, the Cache Worker Role, and the use of SQL Server 2012 LocalDB and IIS Express.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Tricks and Optimizations for your Sitecore website

    - by amaniar
When working with Sitecore there are some optimizations/configurations I usually repeat in order to make my app production ready. Following is a small list I have compiled from experience, the Sitecore documentation, communication with Sitecore engineers, etc. This is not supposed to be technically complete and might not fit all environments.

Simple configurations that can make a difference:

1) Configure Sitecore caches. This is the most straightforward and surest way of increasing the performance of your website. Data and item cache sizes (/databases/database/ [id=web]) should be configured as needed. You may start with smaller numbers and tune them over time.
<cacheSizes hint="setting">
  <data>300MB</data>
  <items>300MB</items>
  <paths>5MB</paths>
  <standardValues>5MB</standardValues>
</cacheSizes>
Tune the html, registry, etc. cache sizes for your website.
<cacheSizes>
  <sites>
    <website>
      <html>300MB</html>
      <registry>1MB</registry>
      <viewState>10MB</viewState>
      <xsl>5MB</xsl>
    </website>
  </sites>
</cacheSizes>
Tune the prefetch cache settings under the App_Config/Prefetch/ folder. Sample /App_Config/Prefetch/Web.Config:
<configuration>
  <cacheSize>300MB</cacheSize>
  <!--preload items that use this template-->
  <template desc="mytemplate">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</template>
  <!--preload this item-->
  <item desc="myitem">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</item>
  <!--preload children of this item-->
  <children desc="childitems">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</children>
</configuration>
Break your page into sublayouts so you may cache most of them. Read the caching configuration reference: http://sdn.sitecore.net/upload/sitecore6/sc62keywords/cache_configuration_reference_a4.pdf

2) Disable Analytics for the shell site:
<site name="shell" virtualFolder="/sitecore/shell" physicalFolder="/sitecore/shell" rootPath="/sitecore/content" startItem="/home" language="en" database="core" domain="sitecore" loginPage="/sitecore/login" content="master" contentStartItem="/Home" enableWorkflow="true" enableAnalytics="false" xmlControlPage="/sitecore/shell/default.aspx" browserTitle="Sitecore" htmlCacheSize="2MB" registryCacheSize="3MB" viewStateCacheSize="200KB" xslCacheSize="5MB" />

3) Increase the check interval for the MemoryMonitorHook so it doesn’t run every 5 seconds (the default):
<hook type="Sitecore.Diagnostics.MemoryMonitorHook, Sitecore.Kernel">
  <param desc="Threshold">800MB</param>
  <param desc="Check interval">00:05:00</param>
  <param desc="Minimum time between log entries">00:01:00</param>
  <ClearCaches>false</ClearCaches>
  <GarbageCollect>false</GarbageCollect>
  <AdjustLoadFactor>false</AdjustLoadFactor>
</hook>

4) Set Analytics.PerformLookup (Sitecore.Analytics.config) to false if your environment doesn’t have access to the internet or you don’t intend to use reverse DNS lookup:
<setting name="Analytics.PerformLookup" value="false" />   5) Set the value of the “Media.MediaLinkPrefix” setting to “-/media”: <setting name="Media.MediaLinkPrefix" value="-/media" /> Add the following line to the customHandlers section: <customHandlers> <handler trigger="-/media/" handler="sitecore_media.ashx" /> <handler trigger="~/media/" handler="sitecore_media.ashx" /> <handler trigger="~/api/" handler="sitecore_api.ashx" /> <handler trigger="~/xaml/" handler="sitecore_xaml.ashx" /> <handler trigger="~/icon/" handler="sitecore_icon.ashx" /> <handler trigger="~/feed/" handler="sitecore_feed.ashx" /> </customHandlers> Link: http://squad.jpkeisala.com/2011/10/sitecore-media-library-performance-optimization-checklist/   6) Performance counters should be disabled in production if not being monitored <setting name="Counters.Enabled" value="false" />   7) Disable Item/Memory/Timing threshold warnings. Due to the nature of this component, it brings no value in production. <!--<processor type="Sitecore.Pipelines.HttpRequest.StartMeasurements, Sitecore.Kernel" />--> <!--<processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel"> <TimingThreshold desc="Milliseconds">1000</TimingThreshold> <ItemThreshold desc="Item count">1000</ItemThreshold> <MemoryThreshold desc="KB">10000</MemoryThreshold> </processor>—>   8) The ContentEditor.RenderCollapsedSections setting is a hidden setting in the web.config file, which by default is true. Setting it to false will improve client performance for authoring environments. <setting name="ContentEditor.RenderCollapsedSections" value="false" />   9) Add a machineKey section to your Web.Config file when using a web farm. Link: http://msdn.microsoft.com/en-us/library/ff649308.aspx   10) If you get errors in the log files similar to: WARN Could not create an instance of the counter 'XXX.XXX' (category: 'Sitecore.System') Exception: System.UnauthorizedAccessException Message: Access to the registry key 'Global' is denied. Make sure the ApplicationPool user is a member of the system “Performance Monitor Users” group on the server.   11) Disable WebDAV configurations on the CD Server if not being used. More: http://sitecoreblog.alexshyba.com/2011/04/disable-webdav-in-sitecore.html   12) Change Log4Net settings to only log Errors on content delivery environments to avoid unnecessary logging. <root> <priority value="ERROR" /> <appender-ref ref="LogFileAppender" /> </root>   13) Disable Analytics for any content item that doesn’t add value. For example a page that redirects to another page.   14) When using Web User Controls avoid registering them on the page the asp.net way: <%@ Register Src="~/layouts/UserControls/MyControl.ascx" TagName="MyControl" TagPrefix="uc2" %> Use Sublayout web control instead – This way Sitecore caching could be leveraged <sc:Sublayout ID="ID" Path="/layouts/UserControls/MyControl.ascx" Cacheable="true" runat="server" />   15) Avoid querying for all children recursively when all items are direct children. Sitecore.Context.Database.SelectItems("/sitecore/content/Home//*"); //Use: Sitecore.Context.Database.GetItem("/sitecore/content/Home");   16) On IIS — you enable static & dynamic content compression on CM and CD More: http://technet.microsoft.com/en-us/library/cc754668%28WS.10%29.aspx   17) Enable HTTP Keep-alive and content expiration in IIS.   18) Use GUID’s when accessing items and fields instead of names or paths. Its faster and wont break your code when things get moved or renamed. 
Context.Database.GetItem("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}") Context.Item.Fields["{89D38A8F-394E-45B0-826B-1A826CF4046D}"]; //is better than Context.Database.GetItem("/Home/MyItem") Context.Item.Fields["FieldName"]   Hope this helps.

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
Yesterday I told Wang Tao, an annoying colleague sitting beside me, how to enable the CDN for the static content in his website, which had just been published on Windows Azure. The approach would be:
Move the static content – the images, CSS files, etc. – into the blob storage.
Enable the CDN on his storage account.
Change the URLs of those static files to the CDN URL.
I think these are the very common steps when using a CDN. But this morning I found that the new Windows Azure SDK 1.4 and the new Windows Azure Developer Portal had just been published, as announced on the Windows Azure Blog. One of the new features in this release concerns the CDN: we can now enable the CDN not only for a storage account but for a hosted service as well. With this new feature the steps I mentioned above become much simpler.

Enable CDN for Hosted Service
To enable the CDN for a hosted service we just need to log on to the Windows Azure Developer Portal. Under the “Hosted Services, Storage Accounts & CDN” item we will find a new menu on the left hand side labeled “CDN”, where we can manage the CDN for storage accounts and hosted services. As we can see, the hosted services and storage accounts are all listed under my subscriptions. Enabling a CDN for a hosted service is very simple: just select a hosted service and click the New Endpoint button on top. In this dialog we can select the subscription and the storage account, or the hosted service, we want the CDN to be enabled for. If we selected the hosted service, like I did in the image above, the “Source URL for the CDN endpoint” will be shown automatically. This means the windows azure platform will make all content under the “/cdn” folder CDN enabled. We cannot change this value at the moment. The 3 checkboxes next to the URL are:
Enable CDN: Enable or disable the CDN.
HTTPS: If we need to use HTTPS connections, check it.
Query String: If we are caching content from a hosted service and we are using query strings to specify the content to be retrieved, check it.
Just click the “Create” button to let windows azure create the CDN for our hosted service. The CDN would be available within 60 minutes, as Microsoft mentioned. My experience is that the CDN could be used after about 15 minutes, and we can find the CDN URL in the portal as well.

Put the Content in CDN in Hosted Service
Let’s create a simple windows azure project in Visual Studio with an MVC 2 Web Role. When we created the CDN mentioned above, the source URL of the CDN endpoint was set to the “/cdn” folder. So in Visual Studio we create a folder under the website named “cdn” and put some static files there. All these files will then be cached by the CDN if we use the CDN endpoint. The CDN of the hosted service can also cache a kind of “dynamic” result when the Query String feature is enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber, which can be CDN-ed as well since the URL says it’s under the “/cdn” folder. In the GetNumber action we just put a number value, specified by a parameter, into the view model, so the URL could be like /Cdn/GetNumber?number=2.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace MvcWebRole1.Controllers
{
    public class CdnController : Controller
    {
        //
        // GET: /Cdn/

        public ActionResult GetNumber(int number)
        {
            return View(number);
        }
    }
}

And we add a view to display the number, which is super simple.

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %>

<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    GetNumber
</asp:Content>

<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2>The number is: <%: Model.ToString() %></h2>
</asp:Content>

Since this action is under the CdnController, the URL is under the “/cdn” folder, which means it can be CDN-ed. And since we checked “Query String”, the content of this dynamic page will be cached by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will first check if there’s any content cached with the key “GetNumber?number=2”. If yes, the CDN will return the content directly; otherwise it will connect to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, then send the result back to the browser and cache it in the CDN. But notice that the query string is treated as a plain string when used as the key of the CDN element. This means the URLs below would be cached as 2 separate elements in the CDN:
http://az25311.vo.msecnd.net/GetNumber?number=2&page=1
http://az25311.vo.msecnd.net/GetNumber?page=1&number=2
The final step is to upload the project onto azure.

Test the Hosted Service CDN
After publishing the project on azure, we can use the CDN in the website. The CDN endpoint we had created is az25311.vo.msecnd.net, so all files under the “/cdn” folder can be requested with it. Let’s have a try with the sample.htm and c_great_wall.jpg static files. Also we can request the dynamic page GetNumber, with its query string, through the CDN endpoint. And if we refresh this page it will be shown very quickly, since the content comes from the CDN without the MVC server-side processing. The style of this page was missing, though. This is because the CSS file was not included in the “/cdn” folder, so the page could not retrieve the CSS file from the CDN URL.

Summary
In this post I introduced the new feature in the Windows Azure CDN that arrived with the release of Windows Azure SDK 1.4 and the new Developer Portal. With the CDN of the hosted service we can just put the static resources under a “/cdn” folder so that the CDN can cache them automatically, with no need to put them into the blob storage. It also supports caching dynamic content with the Query String feature, so we can cache some parts of a web page by using a user control and the CDN. For example, we can cache the log on user control in the master page so that the log on part will load super-fast. There are some other new features within this release, which you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >