Search Results

Search found 17503 results on 701 pages for 'bean validation model'.

  • DetailsView and integration with TinyMCE without <%@ Page validateRequest="false" %>

    - by GibboK
    I use TinyMCE in a DetailsView in EDIT mode. I would like to know if there is a solution that can prevent Request Validation from triggering an error WITHOUT USING <%@ Page validateRequest="false" %> for my page.

    The only way I have found so far is to encode the TextBox used by TinyMCE with the "xml" encoding option:

        tinyMCE.init({
            encoding: "xml",
            ...

    This way Request Validation does not trigger an error, but when I read the data back from the TextBox the result is encoded. I also tried to decode the content of the TextBox on Page_Load using this code:

        myTextBox.Text = HttpUtility.HtmlDecode(myTextBox.Text);

    But the result is not as expected, so I can only see the encoded text. Any ideas? Thanks.

    UPDATE: I found a solution to my problem. I added this code to the DataBound event of the DetailsView:

        TextBox myContentAuthor = (TextBox)uxAuthorListDetailsView.FindControl("uxContentAuthorInput");
        myContentAuthor.Text = HttpUtility.HtmlDecode(myContentAuthor.Text);

    So on the DataBound event (it should work even on postback) the content is decoded for the TinyMCE textbox. Here is how it works:

    1. TinyMCE escapes the data entered in the textbox, using encoding: "xml".
    2. The data is stored escaped.
    3. To read the data back into a TextBox managed by TinyMCE, apply HttpUtility.HtmlDecode in the DataBound event of the DetailsView (so it displays decoded).
    4. You can modify the content in the textbox in edit mode; on postback TinyMCE encodes it again via encoding: "xml", and so on.

    Hope this helps someone else. But please give me your comments on this solution, thanks! Maybe you can come up with a more elegant solution. :-)

  • XSD, restrictions and code generation

    - by bob
    Hello, I'm working on code generation for an existing project and I want to start from an XSD, so I can use tools such as Xsd2Code / xsd.exe to generate the code and also use the XSD to validate the XML. That part works without any problems.

    I also want to translate some of the restrictions to DataAnnotations (enriching Xsd2Code). For example, xs:minInclusive / xs:maxInclusive can be translated to a RangeAttribute. But what should I do with the custom validation attributes that we created? Can I add custom facets / restrictions? And how? Or is there another solution / best practice? I would like to collect everything in a single (XSD) file, so that one file contains the structure of the class (model) including the validation (attributes) that has to be added.

        <xs:element name="CertainValue">
          <xs:simpleType>
            <xs:restriction base="xs:double">
              <xs:minInclusive value="1" />
              <xs:maxInclusive value="100" />
              <xs_custom:customRule attribute="value" />
            </xs:restriction>
          </xs:simpleType>
        </xs:element>
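
    For illustration, a minimal sketch of what the generated C# could look like if the generator maps xs:minInclusive / xs:maxInclusive to RangeAttribute and copies the unrecognized facet onto a project-specific attribute. The CustomRuleAttribute name and its Attribute property are hypothetical stand-ins for whatever custom attributes the project already has:

        using System;
        using System.ComponentModel.DataAnnotations;

        // Hypothetical attribute the generator would emit for <xs_custom:customRule>.
        [AttributeUsage(AttributeTargets.Property)]
        public class CustomRuleAttribute : ValidationAttribute
        {
            public string Attribute { get; set; }

            public override bool IsValid(object value)
            {
                // Project-specific rule goes here; this placeholder always passes.
                return true;
            }
        }

        public class GeneratedModel
        {
            // Emitted from xs:minInclusive / xs:maxInclusive.
            [Range(1.0, 100.0)]
            // Emitted from the custom facet: the generator just copies the facet's
            // XML attributes onto the corresponding .NET attribute's properties.
            [CustomRule(Attribute = "value")]
            public double CertainValue { get; set; }
        }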

  • What is the difference between these two linq implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's "Reimplementing LINQ to Objects" series. In the implementation of the Where article, I found the following snippets, but I don't get what advantage we gain by splitting the original method into two.

    Original method:

        // Naive validation - broken!
        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Refactored method:

        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            return WhereImpl(source, predicate);
        }

        private static IEnumerable<TSource> WhereImpl<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Jon says it's for eager validation, with the rest deferred. But I don't get it. Could someone please explain in a little more detail what the difference between these two functions is, and why the validation is performed eagerly in one and not in the other?

    Conclusion/Solution: I got confused due to my lack of understanding of which methods are compiled as iterator blocks. I assumed it was based on the signature of a method, like IEnumerable<T>. But based on the answers, now I get it: a method is an iterator block if it uses yield statements.
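
    A short demo of the observable difference (my own sketch, not from the article; the two versions are renamed NaiveWhere and EagerWhere so they can coexist in one class):

        using System;
        using System.Collections.Generic;

        static class Extensions
        {
            // The naive version: the whole body is an iterator block because of yield,
            // so NOTHING here (including the null check) runs until enumeration starts.
            public static IEnumerable<T> NaiveWhere<T>(this IEnumerable<T> source, Func<T, bool> predicate)
            {
                if (source == null) throw new ArgumentNullException("source");
                foreach (T item in source)
                    if (predicate(item)) yield return item;
            }

            // The refactored version: no yield in this method, so the argument
            // checks execute immediately, at the call site.
            public static IEnumerable<T> EagerWhere<T>(this IEnumerable<T> source, Func<T, bool> predicate)
            {
                if (source == null) throw new ArgumentNullException("source");
                return EagerWhereImpl(source, predicate);
            }

            private static IEnumerable<T> EagerWhereImpl<T>(IEnumerable<T> source, Func<T, bool> predicate)
            {
                foreach (T item in source)
                    if (predicate(item)) yield return item;
            }
        }

        class Demo
        {
            static void Main()
            {
                IEnumerable<int> source = null;

                // No exception yet: the naive method hasn't actually executed.
                var lazy = source.NaiveWhere(x => x > 0);
                try
                {
                    foreach (var x in lazy) { } // the throw only happens here
                }
                catch (ArgumentNullException) { Console.WriteLine("naive: thrown on enumeration"); }

                try
                {
                    var eager = source.EagerWhere(x => x > 0); // throws immediately
                }
                catch (ArgumentNullException) { Console.WriteLine("refactored: thrown at call site"); }
            }
        }

    The point of the split is that the naive version reports the bug far from where it was introduced, while the refactored version fails fast at the offending call.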

  • Can you stop a deferred callback in jQuery 1.5?

    - by chobo2
    Hi, I am wondering: say you have something like this:

        // Assign handlers immediately after making the request,
        // and remember the jqxhr object for this request
        var jqxhr = $.ajax({ url: "example.php" })
            .success(function(response) { alert("success"); });

        // perform other work here ...

        // Set another success function for the request above
        jqxhr.success(function(response) {
            alert("second success");
        });

    So I am thinking this: I have a general function that I want to use on all my responses, passed in as the first success handler. It basically checks whether server-side validation found any errors; if it did, it formats them and displays a message. Now I am wondering if I could somehow have the second success function do request-specific work. Say one ajax request needs to add a row to a table. That should be possible: I just do what I have above and add the row in the second success handler.

    Is it possible, though, that if the first success handler runs and sees that there are validation errors from the server, I can stop the second success handler from running? Sort of:

        if (first success finds errors) {
            // print out errors
            // don't continue on to the next success
        } else {
            // go on to the next success
        }

    Edit: I found that there is something called deferred.reject and it does stop it, but I am wondering how I can specify stopping only the success chain. My thinking is: if there are other deferred callbacks attached, like complete, will they be rejected too?
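
    One possible approach, offered as a sketch rather than a confirmed answer: don't chain the page-specific handler onto the jqXHR at all. Instead, have the generic handler own a separate $.Deferred (available since jQuery 1.5) that it resolves or rejects after the validation check, and attach the page-specific work to that. The showErrors and addRowToTable helpers here are hypothetical:

        // Generic wrapper: performs the request, runs the server-side validation
        // check, and returns a deferred that only resolves on an error-free response.
        function validatedAjax(options) {
            var validated = $.Deferred();

            $.ajax(options).success(function (response) {
                if (response.errors && response.errors.length) {
                    showErrors(response.errors);       // hypothetical display helper
                    validated.reject(response.errors); // page-specific .done() never fires
                } else {
                    validated.resolve(response);
                }
            });

            return validated;
        }

        // Page-specific code: only runs if validation passed.
        validatedAjax({ url: "example.php" }).done(function (response) {
            addRowToTable(response); // hypothetical
        });

    This sidesteps the rejection question: the jqXHR's own complete/error callbacks are untouched, because the reject happens on your deferred, not on the request.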

  • JPA EntityListeners and @Embeddable

    - by seanizer
    I have a class hierarchy of JPA entities that all inherit from a BaseEntity class:

        @MappedSuperclass
        @EntityListeners({ ValidatorListener.class })
        public abstract class BaseEntity implements Serializable {
            // other stuff
        }

    I want all entities that implement a given interface to be validated automatically on persist and/or update. Here's what I've got. My ValidatorListener:

        public class ValidatorListener {

            private enum Type {
                PERSIST, UPDATE
            }

            @PrePersist
            public void checkPersist(final Object entity) {
                if (entity instanceof Validateable) {
                    this.check((Validateable) entity, Type.PERSIST);
                }
            }

            @PreUpdate
            public void checkUpdate(final Object entity) {
                if (entity instanceof Validateable) {
                    this.check((Validateable) entity, Type.UPDATE);
                }
            }

            private void check(final Validateable entity, final Type persist) {
                switch (persist) {
                case PERSIST:
                    if (entity instanceof Persist) {
                        ((Persist) entity).persist();
                    }
                    if (entity instanceof PersistOrUpdate) {
                        ((PersistOrUpdate) entity).persistOrUpdate();
                    }
                    break;
                case UPDATE:
                    if (entity instanceof Update) {
                        ((Update) entity).update();
                    }
                    if (entity instanceof PersistOrUpdate) {
                        ((PersistOrUpdate) entity).persistOrUpdate();
                    }
                    break;
                default:
                    break;
                }
            }
        }

    And here's my Validateable interface that it checks against (the outer interface is just a marker; the inner ones contain the methods):

        public interface Validateable {

            interface Persist extends Validateable {
                void persist();
            }

            interface PersistOrUpdate extends Validateable {
                void persistOrUpdate();
            }

            interface Update extends Validateable {
                void update();
            }
        }

    All of this works. However, I would like to extend this behavior to Embeddable classes. I know two solutions:

    1. Call the validation method of the embeddable object manually from the entity validation method:

        public void persistOrUpdate() {
            // validate my own properties first
            // then manually validate the embeddable property:
            myEmbeddable.persistOrUpdate();
            // this works but I'd like something that I don't have to call manually
        }

    2. Use reflection, checking all properties to see whether their type is one of the Validateable interface types. This would work, but it's not pretty.

    Is there a more elegant solution?
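
    For what it's worth, a minimal sketch of the reflection variant (option 2), written as a helper the existing listener could call; this is an assumption about how it might look, not a confirmed answer, and the ValidatorCallback interface is hypothetical:

        import java.lang.reflect.Field;

        public final class EmbeddableValidator {

            private EmbeddableValidator() {
            }

            // Walks the entity's declared fields and cascades validation into any
            // value that itself implements Validateable (e.g. an @Embeddable).
            public static void cascade(final Object entity, final ValidatorCallback callback) {
                for (Field field : entity.getClass().getDeclaredFields()) {
                    field.setAccessible(true);
                    try {
                        Object value = field.get(entity);
                        if (value instanceof Validateable) {
                            callback.validate((Validateable) value);
                        }
                    } catch (IllegalAccessException e) {
                        throw new IllegalStateException(e);
                    }
                }
            }

            // Hypothetical callback so the listener can reuse its existing check() logic.
            public interface ValidatorCallback {
                void validate(Validateable embedded);
            }
        }

    The listener's check() method could invoke EmbeddableValidator.cascade(entity, ...) after validating the entity itself; restricting the scan to fields actually annotated with @Embedded would make it both safer and cheaper.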

  • When I try to pass large amounts of information using the jQuery $.ajax (POST) method, it throws a "potentially dangerous client input value" error

    - by dotnetrocks
    I am trying to create a preview window for my text editor in my blog page. I need to send the content to the server to clean up the entered text before I can show it in the preview window. I was trying to use:

        $.ajax({
            type: method,
            url: url,
            data: values,
            success: LoadPageCallback(targetID),
            error: function(msg) {
                $('#' + targetID).attr('innerHTML', 'An error has occurred. Please try again.');
            }
        });

    Whenever I try to click on the preview button it returns an XMLHttpRequest error. The error description:

        Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. You can disable request validation by setting validateRequest=false in the Page directive or in the configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case.

    ValidateRequest for the page is set to false. Is there a way I can set validateRequest to false for the ajax call? Please advise. Thank you for reading my post.
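
    Two things worth checking, offered as assumptions rather than a verified fix. First, if this is ASP.NET 4, the page-level validateRequest="false" is ignored unless web.config also sets <httpRuntime requestValidationMode="2.0" />. Second, request validation only rejects raw markup, so a workaround is to escape the angle brackets before posting and reverse it on the server (e.g. with HttpUtility.HtmlDecode) before the cleanup step. A sketch, where '#editor' is a hypothetical selector:

        // Escape markup before sending so request validation sees no raw tags.
        // The server must decode this before running its cleanup logic.
        var values = {
            content: $('#editor').val()
                .replace(/&/g, '&amp;')
                .replace(/</g, '&lt;')
                .replace(/>/g, '&gt;')
        };

        $.ajax({
            type: 'POST',
            url: url,
            data: values,
            success: LoadPageCallback(targetID)
        });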

  • Better way of showing file upload errors?

    - by coure06
    Model:

        public class EmailAttachment
        {
            public string FileName { get; set; }
            public string FileType { get; set; }
            public int FileSize { get; set; }
            public Stream FileData { get; set; }
        }

        public class ContactEmail : IDataErrorInfo
        {
            public string Name { get; set; }
            public string Email { get; set; }
            public string Message { get; set; }
            public EmailAttachment Attachment { get; set; }

            public string Error { get { return null; } }

            public string this[string propName]
            {
                get
                {
                    if (propName == "Name" && String.IsNullOrEmpty(Name))
                        return "Please Enter your Name";
                    if (propName == "Email")
                    {
                        if (String.IsNullOrEmpty(Email))
                            return "Please Provide an Email Address";
                        else if (!Regex.IsMatch(Email, ".+\\@.+\\..+"))
                            return "Please Enter a valid email Address";
                    }
                    if (propName == "Message" && String.IsNullOrEmpty(Message))
                        return "Please Enter your Message";
                    return null;
                }
            }
        }

    And my controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Con(ContactEmail ce, HttpPostedFileBase file)
        {
            return View();
        }

    Now the problem: from the form I am getting Name, Email, Message and the uploaded file. I get validation errors automatically for Name, Email and Message through the this[string propName] indexer. How can I show validation errors if Attachment.FileSize > 10000? If I write that check in the indexer, Attachment is always null. How can I fill the Attachment object of ContactEmail so that I can manage all errors in the same place?
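
    One way to sketch this (my assumption, not a canonical answer): bind the posted file onto the model inside the action, then report the size rule through ModelState, which is the same channel the IDataErrorInfo messages arrive on, so the view renders them together. The "Thanks" action name is hypothetical:

        using System.Web;
        using System.Web.Mvc;

        public class ContactController : Controller
        {
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Con(ContactEmail ce, HttpPostedFileBase file)
            {
                // Fill the attachment ourselves so the rule has data to look at.
                if (file != null && file.ContentLength > 0)
                {
                    ce.Attachment = new EmailAttachment
                    {
                        FileName = file.FileName,
                        FileType = file.ContentType,
                        FileSize = file.ContentLength,
                        FileData = file.InputStream
                    };
                }

                // Attachment-size rule, surfaced alongside the IDataErrorInfo errors.
                if (ce.Attachment != null && ce.Attachment.FileSize > 10000)
                {
                    ModelState.AddModelError("Attachment", "Attachment must be 10 KB or smaller.");
                }

                if (!ModelState.IsValid)
                {
                    return View(ce);
                }

                // send the email, then redirect ("Thanks" is a hypothetical action)
                return RedirectToAction("Thanks");
            }
        }

    A custom model binder could do the same population step if you want the check to live entirely in the indexer.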

  • RoR ActiveRecord f.select nil method error

    - by sellis6688
    Whenever I use an f.select statement to set assignment_id (or student_id) and I should get a validation error, I get this error instead of the validation message:

        You have a nil object when you didn't expect it!
        You might have expected an instance of Array.
        The error occurred while evaluating nil.collect

    Extracted source (around line #11):

        8: </p>
        9: <p>
        10: <%= f.label 'Assignment:' %><br />
        11: <%= f.select(:assignment_id, @assignments.collect {|p| [p.ass_num, p.id]})%>
        12: </p>
        13: <p>
        14: <%= f.label 'First Student:' %><br />

    My Grade model:

        class Grade < ActiveRecord::Base
          has_and_belongs_to_many :students
          belongs_to :assignment
          validates_presence_of :score, :assignment_id, :student_id
          validates_numericality_of :score, :greater_than_or_equal_to => 0,
            :less_than_or_equal_to => 100, :allow_nil => true
          validates_uniqueness_of :student_id, :scope => :assignment_id
        end

    If I use a text_field, I don't get the error... but there are far too many students for that. Neither @assignments nor @students are nil. Any suggestions?
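
    A common cause of exactly this symptom, offered as a guess since the controller isn't shown: the collections are set in the new action, but when validation fails, create re-renders the new template without setting them again, so @assignments is nil on that code path. A sketch of the usual fix:

        class GradesController < ApplicationController
          def create
            @grade = Grade.new(params[:grade])
            if @grade.save
              redirect_to @grade
            else
              # render :new reuses the same template, so every collection the
              # template needs must be repopulated on the failure path too.
              @assignments = Assignment.all
              @students    = Student.all
              render :action => 'new'
            end
          end
        end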

  • Sanitizing User Input with Ruby on Rails

    - by phreakre
    I'm writing a very simple CRUD app that takes user stories and stores them in a database so another fellow coder can organize them for a project we're both working on. However, I have come across a problem with sanitizing user input before it is saved to the database. I cannot call the sanitize() helper directly from within the Story model to strip out all of the HTML/scripting. It requires me to do the following:

        def sanitize_inputs
          self.name = ActionController::Base.helpers.sanitize(self.name) unless self.name.nil?
          self.story = ActionController::Base.helpers.sanitize(self.story) unless self.story.nil?
        end

    I want to validate that the user input has been sanitized, and I am unsure of two things:

    1. When should the input sanitization take place? Before the data is saved is pretty obvious, I think. However, should I be processing this stuff in the controller, before validation, or in some other non-obvious place before I validate that the user input has no scripting/HTML tags?

    2. Writing a unit test for this model, how would I verify that the scripting/HTML is removed, besides comparing "This is a malicious code example" to the sanitize(example) output?

    Thanks in advance.
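
    One possible arrangement, as a sketch rather than the definitive answer: run the stripping in a before_validation callback in the model, so every save path is covered and the subsequent validations see the cleaned values, then unit-test by asserting the tags are gone after valid? runs:

        class Story < ActiveRecord::Base
          before_validation :sanitize_inputs

          private

          def sanitize_inputs
            # Sanitize every content column in one place instead of field by field.
            [:name, :story].each do |attr|
              value = self[attr]
              self[attr] = ActionController::Base.helpers.sanitize(value) unless value.nil?
            end
          end
        end

        # A unit-test sketch (Test::Unit style):
        class StoryTest < ActiveSupport::TestCase
          def test_scripting_is_stripped_before_validation
            story = Story.new(:name => 'x', :story => '<script>alert("boom")</script>hello')
            story.valid? # triggers before_validation
            assert_no_match(/<script/, story.story)
          end
        end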

  • Django Save Incomplete Progress on Form

    - by jimbob
    I have a Django webapp with multiple users logging in and filling in a form. Some users may start filling in a form but lack some required data (e.g., a grant #) needed to validate the form (and before we can start working on it). I want them to be able to fill out the form and have an option to either save the partial info (so another day they can log back in and complete it) or submit the full info, which undergoes validation.

    Currently I'm using ModelForm for all the forms I use, and the model has constraints to ensure valid data (e.g., the grant # has to be unique). However, I want users to be able to save this intermediate data without undergoing any validation.

    The solution I've thought of seems rather inelegant and un-Django-ey: create a "Save Partial Form" button that saves the POST dictionary, converts it to a shelf file, and create a "SavedPartialForm" model connecting the user to partial forms saved in the shelf. Does this seem sensible? Is there a better way to save the POST dict directly into the DB? Or is there an add-on module that does this partial save of a form (which seems to be a fairly common activity with webforms)?

    My biggest concern with my method is that I eventually want to do this form autosave automatically (say every 10 minutes) in some ajax/jQuery way, without actually pressing a button and sending the POST request (e.g., so the user isn't redirected off the page when autosave is triggered). I'm not that familiar with jQuery and am wondering if it would be possible to do this.
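
    A sketch of one commonly suggested pattern (an assumption on my part, not a packaged solution): keep the real ModelForm for final submission, and for "save draft" use a subclass that marks every field optional, so partial data still round-trips through the same model. The Grant model and field names here are hypothetical, and the model columns must tolerate incomplete rows (null=True/blank=True) for drafts to save:

        from django import forms
        from django.db import models

        class Grant(models.Model):
            # null/blank let a draft row exist before the required data arrives;
            # unique=True still holds once a number is actually supplied.
            grant_number = models.CharField(max_length=20, unique=True, null=True, blank=True)
            title = models.CharField(max_length=100, blank=True)

        class GrantForm(forms.ModelForm):
            """The real form: full validation on final submission."""
            class Meta:
                model = Grant
                fields = ('grant_number', 'title')

        class DraftGrantForm(GrantForm):
            """Same form, but nothing is required, for 'save partial' requests."""
            def __init__(self, *args, **kwargs):
                super(DraftGrantForm, self).__init__(*args, **kwargs)
                for field in self.fields.values():
                    field.required = False

        # In the view, pick the form class based on which button was pressed:
        def edit_grant(request):
            form_class = DraftGrantForm if 'save_draft' in request.POST else GrantForm
            form = form_class(request.POST or None)
            if request.method == 'POST' and form.is_valid():
                form.save()
            # ... render the template with `form` as usual

    The ajax autosave then becomes a jQuery $.post to the same view with save_draft set, so no redirect is involved.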

  • Store Business Rules in XML Document, Validate afterwards in Java, how?

    - by JavaPete
    Example XML rules document:

        <user>
            <username>
                <not-null/>
                <capitals value="false"/>
                <max-length value="15"/>
            </username>
            <email>
                <not-null/>
                <isEmail/>
                <max-length value="40"/>
            </email>
        </user>

    How do I implement this? I'm starting from scratch. What I currently have is a User class and a UserController which saves the User object in the DB (through a Service layer and DAO layer), basic Spring MVC. I can't use Spring MVC validation in our model classes, however; I have to use an XML document so an admin can change the rules.

    I think I need a pattern that dynamically builds an algorithm based on what the XML rules document provides, but I can't seem to think of anything other than a massive amount of if-statements. I also have nothing for the parsing yet, and I'm not sure how I'm going to (de)couple it from the actual implementation of the validation process.
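
    One way to avoid the if-statement pile is the strategy pattern: give each rule element its own small Rule implementation and look them up by tag name in a registry. A sketch with hypothetical class names (only two rules shown):

        import java.util.HashMap;
        import java.util.Map;

        // One strategy object per rule element; a new rule means a new class, not a new if.
        interface Rule {
            // 'value' is the rule element's value attribute (may be null).
            // Returns an error message, or null if the input passes.
            String check(String input, String value);
        }

        class NotNullRule implements Rule {
            public String check(String input, String value) {
                return (input == null || input.isEmpty()) ? "may not be empty" : null;
            }
        }

        class MaxLengthRule implements Rule {
            public String check(String input, String value) {
                int max = Integer.parseInt(value);
                return (input != null && input.length() > max) ? "longer than " + max : null;
            }
        }

        class RuleRegistry {
            private final Map<String, Rule> rules = new HashMap<String, Rule>();

            RuleRegistry() {
                // Tag name from the XML -> strategy that implements it.
                rules.put("not-null", new NotNullRule());
                rules.put("max-length", new MaxLengthRule());
                // register capitals, isEmail, ... the same way
            }

            String apply(String tagName, String attrValue, String input) {
                Rule rule = rules.get(tagName);
                if (rule == null) {
                    throw new IllegalArgumentException("unknown rule: " + tagName);
                }
                return rule.check(input, attrValue);
            }
        }

    The XML parser (DOM or SAX) then only walks the field elements and hands each child tag to the registry, which keeps parsing decoupled from validation: admins edit the XML, developers only touch code when a genuinely new rule type appears.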

  • Rails: update_attribute vs update_attributes

    - by Sam
        Object.update_attribute(:only_one_field, "Some Value")
        Object.update_attributes(:field1 => "value", :field2 => "value2", :field3 => "value3")

    Both of these will update an object without having to explicitly tell ActiveRecord to save. The Rails API says:

    For update_attribute: "Updates a single attribute and saves the record without going through the normal validation procedure. This is especially useful for boolean flags on existing records. The regular update_attribute method in Base is replaced with this when the validations module is mixed in, which it is by default."

    For update_attributes: "Updates all the attributes from the passed-in Hash and saves the record. If the object is invalid, the saving will fail and false will be returned."

    So if I don't want the object validated, I should use update_attribute. What if I call this update in a before_save; will it overflow the stack? My question is: does update_attribute also bypass the before_save callback, or just the validation? Also, what is the correct syntax to pass a hash to update_attributes? Check out my example at the top.
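
    A quick sketch of the difference, assuming a hypothetical User model with validates_presence_of :email (Rails 2.x API):

        user = User.first

        # Skips validations entirely: saves even though email is blank.
        # Note it still runs save callbacks such as before_save; only the
        # validations are skipped, so calling it FROM before_save can recurse.
        user.update_attribute(:email, "")        # => true

        # Runs validations: refuses to save and returns false.
        user.update_attributes(:email => "")     # => false
        user.errors.on(:email)                   # => "can't be blank"

        # Hash syntax for several fields at once, as in the example at the top:
        user.update_attributes(:name => "Sam", :email => "sam@example.com")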

  • Required attribute HTML5

    - by Joop
    First of all, I will explain how I stumbled onto this behavior. Within my web application I am using some custom validation for my form fields. Within the same form I have two buttons: one to actually submit the form and the other to cancel/reset it. Mostly I use Safari as my default browser. Now Safari 5 is out, and suddenly my cancel/reset button didn't work anymore. Every time I hit the reset button, the first field in my form got the focus. However, this is the same behavior as my custom form validation. When trying it with another browser, everything just worked fine. It had to be a Safari 5 problem. I changed a bit of my JavaScript code and found out that the following line was causing the problem:

        document.getElementById("somefield").required = true;

    To be sure that this was really the problem, I created a test scenario:

        <!DOCTYPE html>
        <html>
        <head>
            <title>Test</title>
        </head>
        <body>
            <form id="someform">
                <label>Name:</label>&nbsp;<input type="text" id="name" required="true" /><br/>
                <label>Car:</label>&nbsp;<input type="text" id="car" required="true" /><br/>
                <br/>
                <input type="submit" id="btnsubmit" value="Submit!" />
            </form>
        </body>
        </html>

    What I expected would happen did happen: the first field, "name", automatically got the focus. Has anyone else stumbled onto this?
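
    A workaround sketch, offered as an assumption rather than a confirmed Safari fix: keep the cancel button out of the constraint-validation path, either by making it a true reset control or by marking a submitting button with formnovalidate so clicking it never triggers validation. (Side note: required is a boolean attribute, so the spec form is just required rather than required="true".)

        <form id="someform">
            <input type="text" id="name" required /><br/>
            <input type="text" id="car" required /><br/>

            <input type="submit" id="btnsubmit" value="Submit!" />

            <!-- Option 1: a true reset control, which should not validate -->
            <input type="reset" value="Cancel" />

            <!-- Option 2: a button that resets via script and opts out of validation -->
            <input type="submit" value="Cancel" formnovalidate
                   onclick="document.getElementById('someform').reset(); return false;" />
        </form>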

  • Rails 2.3.4 and jQuery Form plugin works in development, not in production?

    - by hemajang
    Hello, I'm trying to build a contact form in Rails 2.3.4. I'm using the jQuery Form plugin along with the jQuery Validation plugin (http://bassistance.de/jquery-plugins/jquery-plugin-validation/) for validations. Everything works in my development environment (Mac OS X Snow Leopard): the loading gif appears, and in my log the email gets sent and the "request completed" notice shows. But on my production machine the loading gif just keeps spinning and the form doesn't get sent. I've waited as long as I could; nothing. Here is my code:

    /public/javascripts/application.js:

        // client-side validation and ajax submit contact form
        $('#contactForm').validate({
            rules: {
                'email[name]': { required: true },
                'email[address]': { required: true, email: true },
                'email[subject]': { required: true },
                'email[body]': { required: true }
            },
            messages: {
                'email[name]': "Please enter your name.",
                'email[address]': "Please enter a valid email address.",
                'email[subject]': "Please enter a subject.",
                'email[body]': "Please enter a message."
            },
            submitHandler: function(form) {
                $(form).ajaxSubmit({
                    dataType: 'script',
                    beforeSend: function() {
                        $(".loadMsg").show();
                    }
                });
                return false;
            }
        });

    I'm using the submitHandler to send the actual ajaxSubmit. I added the dataType: 'script' and the beforeSend for the loading graphic. The controller:

        def send_mail
          if request.post?
            respond_to do |wants|
              ContactMailer.deliver_contact_request(params[:email])
              flash[:notice] = "Email was successfully sent."
              wants.js
            end
          end
        end

    Everything works fine in development, but not in production. What am I missing or what did I do wrong?

  • SQL Server Express service is not starting

    - by Mahdi Ghiasi
    I bought my first VPS yesterday, and I have installed Microsoft SQL Server 2012 Express on it. Then I restarted the VPS, but the SQL Server service didn't start. I've tried to start it manually, but it can't start. What is the problem? How do I solve it?

    P.S.: This is my first server management task and I'm a newbie. If you need any further details about this, please leave a comment; I'll update the question.

    Update 1: Here are some log details from Event Viewer that I thought might be useful for this problem:

        FCB::Open failed: Could not open file e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.).
        The resource database build version is 11.00.3000. This is an informational message only. No user action is required.
        FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation.
        Starting up database 'model'.
        FCB::Open failed: Could not open file e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\model.mdf for file number 1. OS error: 3(The system cannot find the path specified.).
        FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\modellog.ldf'. Diagnose and correct the operating system error, and retry the operation.

    I'm confused about these e:\ paths. My VPS has just one C:\ drive, so what is e:\?
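
    For context: those e:\sql11_main_t... paths are the paths from the machine where Microsoft built SQL Server, which suggests the system databases (model, msdb) were never relocated correctly during setup or a patch. A repair sketch adapted from the documented procedure for moving system databases, assuming a default SQLEXPRESS instance; the target paths below are examples and must point at where the .mdf/.ldf files actually are:

        rem Start master only, so the broken model/msdb don't block startup.
        net start MSSQL$SQLEXPRESS /f /T3608

        sqlcmd -S .\SQLEXPRESS -E -Q "ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLEXPRESS\MSSQL\DATA\model.mdf');"
        sqlcmd -S .\SQLEXPRESS -E -Q "ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLEXPRESS\MSSQL\DATA\modellog.ldf');"
        sqlcmd -S .\SQLEXPRESS -E -Q "ALTER DATABASE msdb MODIFY FILE (NAME = MSDBData, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLEXPRESS\MSSQL\DATA\MSDBData.mdf');"
        sqlcmd -S .\SQLEXPRESS -E -Q "ALTER DATABASE msdb MODIFY FILE (NAME = MSDBLog, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLEXPRESS\MSSQL\DATA\MSDBLog.ldf');"

        net stop MSSQL$SQLEXPRESS
        net start MSSQL$SQLEXPRESS

    If the files don't exist anywhere on disk, a repair of the instance (setup.exe /ACTION=REPAIR) or a reinstall is likely the safer route.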

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. The two biggest tables contain over 800k rows each. The majority of rows are less than 10 KB in size, though roughly 1 in 100 rows will be between 1 MB and 4 MB. So out of the 1.6 million rows, about 16,000 of them will be these large rows. The reason they are this big is that we're storing zip files as binary blobs in the database, but I'm digressing.

    We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph (attached), these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot.

    I've run the SQL Server Profiler, and nothing jumps out as a candidate that would explain this spike. My suspicion is that the spike occurs when one of the large rows gets deleted. I've fed the profiler results into the tuning wizard and get no optimization recommendations (i.e., I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike.

    Update: After investigating this some more, the CPU and disk usage spike turned out to be SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, checkpoints occur when the transaction log becomes 70% full and the simple recovery model is in use. This has been enlightening, and I've definitely learned something!
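
    For anyone wanting to confirm the same correlation on their own system, a small sketch (my suggestion, not from the original post): sample the checkpoint and log counters repeatedly and line the deltas up against the CPU/disk spikes in Performance Monitor:

        -- Snapshot the checkpoint and log counters; run periodically (or log
        -- the results to a table) while the spikes are occurring.
        SELECT counter_name,
               instance_name,
               cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Checkpoint pages/sec', 'Percent Log Used')
        ORDER BY counter_name, instance_name;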

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to reconsider how we set up the system. Basically, the steps that need to happen are:

    1. Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
    2. Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
    3. New users need to be created to manage the databases and run the models.
    4. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.
    5. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.

    I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except there are a couple of usage cases that I am wondering about:

    - During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source (a rough sketch of what that could look like follows below).
    - We may have to deploy on machines that are isolated from the Internet, i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models.

    I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.
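
    For the build-from-source case, a minimal Fabric (1.x API) sketch of what such tasks could look like; the package names, paths and configure flags are placeholders, not a recommendation of Fabric over the other tools:

        from fabric.api import cd, put, run, sudo

        def install_wave_model():
            """Fetch, build and install a custom scientific model from source."""
            # For the internet-isolated case, the tarball comes off the USB key
            # on the control machine and is pushed to the target with put().
            put('packages/wavemodel-2.1.tar.gz', '/tmp/')  # placeholder tarball
            with cd('/tmp'):
                run('tar xzf wavemodel-2.1.tar.gz')
            with cd('/tmp/wavemodel-2.1'):
                run('./configure --prefix=/opt/wavemodel')  # placeholder flags
                run('make')
                sudo('make install')

        def create_model_user():
            """Create the unprivileged account that runs the forecasts."""
            sudo('useradd --create-home --shell /bin/bash wavemodel')

    Because Fabric only needs SSH to the target and runs entirely from the control machine, it copes naturally with the offline-deploy constraint; Puppet would instead want its manifests and file sources mirrored onto the isolated host.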

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending April next year, we're looking into our options. A few of them:

    1. Replace the 300 PCs with full Windows 7 PCs (£100k+?), with no use of a terminal server (our current model).
    2. Replace the 300 PCs with off-the-shelf thin clients and make use of our terminal server: cheaper clients, but Terminal Server CALs required?
    3. Keep the 300 PCs and replace Windows XP with a Linux thin client capable of connecting to our terminal server: no hardware costs, just Terminal Server CALs required?
    4. Keep the 300 PCs, remove the hard drives and make use of a PXE-bootable "thin client" to connect to our terminal server.

    If we were to choose option 4, what are the options out there? Are there any official PXE-bootable thin clients for terminal server? If so, what are the licence requirements? Are there options we haven't considered? There must be lots of companies out there in this situation; I'm curious what the current trend is for this problem.

    Edit: Option 5 - create a bootable Windows PE image with RDP autostart and use that as a "thin client" for our terminal server. Is Windows PE licence-free in such a model?

  • Can I upgrade the CPU in my Lenovo 3000 N100 laptop?

    - by Pavel
    I've got an Intel Core Duo T2300 in my laptop (Lenovo 3000 N100, 0768-49G). Here is what I could find out about it:

        $ sudo dmidecode
        # dmidecode 2.11
        SMBIOS 2.4 present.
        42 structures occupying 1436 bytes.
        Table at 0x000DC010.

        Handle 0x0000, DMI type 0, 24 bytes
        BIOS Information
                Vendor: LENOVO
                Version: 61ET37WW
                Release Date: 06/04/07
                Address: 0xE6B70
        [...]
        Handle 0x0002, DMI type 2, 8 bytes
        Base Board Information
                Manufacturer: LENOVO
                Product Name: CAPELL VALLEY(NAPA) CRB
        [...]
        Handle 0x0004, DMI type 4, 35 bytes
        Processor Information
                Socket Designation: U2E1
                Type: Central Processor
                Family: Other
                Manufacturer: Intel
                ID: E8 06 00 00 FF FB E9 BF
                Version: Genuine Intel(R) CPU T2300 @ 1.66GHz
                Voltage: 3.3 V
                External Clock: 166 MHz
                Max Speed: 2048 MHz
                Current Speed: 1600 MHz
                Status: Populated, Enabled
                Upgrade: ZIF Socket
                L1 Cache Handle: 0x0005
                L2 Cache Handle: 0x0006
                L3 Cache Handle: Not Provided
                Serial Number: Not Specified
                Asset Tag: Not Specified
                Part Number: Not Specified

        $ cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 14
        model name      : Genuine Intel(R) CPU T2300 @ 1.66GHz
        stepping        : 8
        microcode       : 0x39
        cpu MHz         : 1000.000
        cache size      : 2048 KB

    I believe the chipset is "Mobile Intel 945GM Express", but I don't know how to verify it on a Linux system. I'm not sure about the socket, but Intel claims "Sockets Supported: PBGA479, PPGA478". Now, I'd like to upgrade to the fastest compatible CPU available, but I'm a bit lost in all the details. Can you guys help me out with a couple of questions, please?

    1. What CPUs can I choose from? (I think it's only the Core2Duo line, but it should be enough for an upgrade.)
    2. Can I use a 64-bit CPU?
    3. Can I use a CPU with a higher FSB than 667 MHz?
    4. Do I have to worry about additional cooling, or is it enough to check for similar voltage/TDP values?

    Thank you!
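
    On the chipset-verification question, a quick check with standard tools (the sample output line is illustrative of what a 945GM machine typically reports):

        $ lspci | grep -i -e bridge -e vga
        # If the "Mobile Intel 945GM Express" guess is right, expect lines like:
        # 00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML
        #         Express Memory Controller Hub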

  • Validating signature trust with gpg?

    - by larsks
    We would like to use gpg signatures to verify some aspects of our system configuration management tools. Additionally, we would like to use a "trust" model where individual sysadmin keys are signed with a master signing key, and then our systems trust that master key (and use the "web of trust" to validate signatures by our sysadmins). This gives us a lot of flexibility, such as the ability to easily revoke the trust on a key when someone leaves, but we've run into a problem.

    While the gpg command will tell you if a key is untrusted, it doesn't appear to return an exit code indicating this fact. For example:

        # gpg -v < foo.asc
        Version: GnuPG v1.4.11 (GNU/Linux)
        gpg: armor header:
        gpg: original file name=''
        this is a test
        gpg: Signature made Fri 22 Jul 2011 11:34:02 AM EDT using RSA key ID ABCD00B0
        gpg: using PGP trust model
        gpg: Good signature from "Testing Key <[email protected]>"
        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.
        Primary key fingerprint: ABCD 1234 0527 9D0C 3C4A  CAFE BABE DEAD BEEF 00B0
        gpg: binary signature, digest algorithm SHA1

    The part we care about is this:

        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.

    The exit code returned by gpg in this case is 0, despite the trust failure:

        # echo $?
        0

    How do we get gpg to fail in the event that something is signed with an untrusted signature? I've seen some suggestions that the gpgv command will return a proper exit code, but unfortunately gpgv doesn't know how to fetch keys from keyservers. I guess we can parse the status output (using --status-fd) from gpg, but is there a better way?
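
    A sketch of the --status-fd route mentioned at the end, as an assumption about what the parsing could look like: the machine-readable stream emits TRUST_* tokens alongside GOODSIG, so the trust level can be checked explicitly (for a detached signature, pass both the .sig and the data file to --verify):

        #!/bin/sh
        # Verify a signature and require full or ultimate trust,
        # not merely a good signature.
        status=$(gpg --status-fd 1 --verify "$1" 2>/dev/null)

        if echo "$status" | grep -q '^\[GNUPG:\] GOODSIG' &&
           echo "$status" | grep -qE '^\[GNUPG:\] TRUST_(FULLY|ULTIMATE)'; then
            echo "signature good and trusted"
            exit 0
        else
            echo "signature missing, bad, or untrusted" >&2
            exit 1
        fi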

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not the array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72 GB to 40 GB.

        => ctrl all show config detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: 5001438006FD9A50
           Cache Serial Number: PAAVP9VYFB8Y
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Chassis Slot:
           Hardware Revision: Rev C
           Firmware Version: 3.66
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 3 secs
           Surface Scan Mode: Idle
           Queue Depth: Automatic
           Monitor and Performance Delay: 60 min
           Elevator Sort: Enabled
           Degraded Performance Optimization: Disabled
           Inconsistency Repair Policy: Disabled
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 15 secs
           Cache Board Present: True
           Cache Status: OK
           Accelerator Ratio: 25% Read / 75% Write
           Drive Write Cache: Enabled
           Total Cache Size: 512 MB
           No-Battery Write Cache: Disabled
           Cache Backup Power Source: Batteries
           Battery/Capacitor Count: 1
           Battery/Capacitor Status: OK
           SATA NCQ Supported: True

           Array: A
              Interface Type: SAS
              Unused Space: 412476 MB
              Status: OK

              Logical Drive: 1
                 Size: 72.0 GB
                 Fault Tolerance: RAID 1+0
                 Heads: 255
                 Sectors Per Track: 32
                 Cylinders: 18504
                 Strip Size: 256 KB
                 Status: OK
                 Array Accelerator: Enabled
                 Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3
                 Disk Name: /dev/cciss/c0d0
                 Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB
                 OS Status: LOCKED
                 Logical Drive Label: AE438D6A5001438006FD9A50BE0A

                 Mirror Group 0:
                    physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
                    physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
                 Mirror Group 1:
                    physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
                    physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)

           SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250
              Device Number: 250
              Firmware Version: RevC
              WWID: 5001438006FD9A5F
              Vendor ID: PMCSIERA
              Model: SRC 8x6G

  • Looking for updated BIOS for '99 Gateway in order to format/recognize >127 GB HD

    - by Jeff
    I have a '99 Gateway that's apparently too old for even Gateway to acknowledge it exists. I want to use it as a media hub and put in a 320 GB hard drive, but it will not format above 127 GB even running Windows XP SP3. I read somewhere that upgrading the BIOS may do the trick, but I can't find the correct BIOS, and Gateway has been no help. I'm hoping I can just upgrade the BIOS, which is 11 years old. Any help would be much appreciated! I don't know where to look, and searches have been fruitless.

    System info:

        OS Name: Microsoft Windows XP Home Edition
        Version: 5.1.2600 Service Pack 3 Build 2600
        OS Manufacturer: Microsoft Corporation
        System Name: xxxx
        System Manufacturer: Gateway
        System Model: TABOR_II
        System Type: X86-based PC
        Processor: x86 Family 6 Model 7 Stepping 3 GenuineIntel ~596 Mhz
        BIOS Version/Date: Intel Corp. 4W4SB0X0.15A.0015.P10, 9/28/1999
        SMBIOS Version: 2.1

    BIOS info (from a free app I located):

        BIOS Type: Phoenix
        BIOS Date: September 28th 1999
        BIOS ID: 4W4SB0X0.15A.0015.P10.9909281445-None
        BIOS OEM: 4W4SB0X0.15A.0015.P10
        Chipset: Intel 440BX/ZX rev 3
        SuperIO: SMC 70x or 80x rev 0 at port 0370
        Manufacturer: Gateway
        Motherboard: WS440BX

  • HP StorageWorks Ultrium 448 tape drive input/output error with Ubuntu

    - by Dan D
    I'm trying to set up a backup to tape of a machine using flexbackup. However, any attempt to write to the tape drive (via either flexbackup or just tar) results in "/dev/st0: Input/output error".

    The machine seems to recognise the drive (HP StorageWorks Ultrium 448) and that there's a tape in it, and "mt status" seems to work. "mt -f /dev/st0 rewind" and "erase" throw no errors:

        root@stor001:/# mt status
        SCSI 2 tape drive:
        File number=0, block number=0, partition=0.
        Tape block size 0 bytes. Density code 0x42 (LTO-2).
        Soft error count since last status=0
        General status bits on (41010000):
         BOT ONLINE IM_REP_EN

        root@stor001:/# cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: HL-DT-ST Model: DVDRAM GSA-4084N Rev: KS02
          Type:   CD-ROM                           ANSI SCSI revision: 05
        Host: scsi2 Channel: 00 Id: 03 Lun: 00
          Vendor: HP       Model: Ultrium 2-SCSI   Rev: S65D
          Type:   Sequential-Access                ANSI SCSI revision: 03

    "tell", however, does throw one:

        root@stor001:/# mt -f /dev/st0 tell
        /dev/st0: Input/output error

    Based on a forum post I found, I tried:

        root@stor001:/# dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
        10+0 records in
        10+0 records out
        10240 bytes (10 kB) copied, 5.0815 s, 2.0 kB/s

    which gave the person on the forum an error but seems to work for me. If anyone has any suggestions, I'm all ears...
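
    A few things worth trying, offered as guesses from the symptoms rather than a confirmed fix ("Tape block size 0 bytes" indicates variable-block mode, and "tell" failing can simply mean the drive doesn't support that command): make the block settings explicit, retry a larger write, and check what the kernel logs about the failure:

        # Rewind, force variable-block mode, and try a write with a bigger block size.
        mt -f /dev/st0 rewind
        mt -f /dev/st0 setblk 0
        dd if=/dev/zero of=/dev/nst0 bs=64k count=100

        # See what the SCSI layer reports about any failure.
        dmesg | tail

        # tar's blocking factor should match the block size used:
        # 128 records x 512 bytes = 64 KB.
        tar -cb 128 -f /dev/nst0 /etc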

  • What should I do to make sure that IIS does not recycle my application?

    - by AngryHacker
    I have a WCF service app hosted in IIS. On startup, it fetches a really expensive (in terms of time and CPU) resource to use as a local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis, so I am trying to change the settings on the application pool to make sure that IIS does not recycle the application. So far, I've changed the following:

    - Limit Interval under CPU from 5 to 0.
    - Idle Time-out under Process Model from 20 to 0.
    - Regular Time Interval under Recycling from 1740 to 0.

    Will this be enough? I also have specific questions about the items I changed:

    1. What specifically does the Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled?
    2. What exactly does "recycled" mean? Is the application completely torn down and started up again?
    3. What is the difference between "worker process shutdown" and "application pool recycling"? The documentation for the Idle Time-out under Process Model talks about shutting down the worker process, while the docs for the Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two; I thought w3wp.exe is the worker process which runs the application pool. Can someone explain the difference between the two as it affects the application?

    The reason for having both IIS7 and IIS7.5 tags is that the app will run on both, and I hope the answers are the same for both versions. (An image of the settings dialog was attached for reference.)
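
    For reference, the same three settings can be scripted with appcmd (found in %windir%\system32\inetsrv); "MyAppPool" here is an assumed pool name:

        appcmd set apppool "MyAppPool" /cpu.limit:0
        appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
        appcmd set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00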
