Search Results

Search found 46749 results on 1870 pages for 'system preferences'.


  • Custom session: Window does not capture full screen area by default. 12.04

    - by juzerali
    I am trying to create a custom session by creating a custom.desktop file in the /usr/share/xsessions folder. Remember, this is not a GNOME or other standard session; I have created my own applications for this session, which are simple.

    Case 1: Chrome browser
    Contents of custom.desktop:

        [Desktop Entry]
        Name=Internet Kiosk
        Comment=This is an internet kiosk
        Exec=google-chrome --kiosk
        TryExec=
        Icon=
        Type=Application

    Issue: the Chrome browser starts in kiosk mode but does not capture the complete screen area. Some area is left at the bottom and right side of the screen.

    Case 2: Custom PyGTK app (Quickly)
    Contents of custom.desktop:

        [Desktop Entry]
        Name=Custom Kiosk
        Comment=This is a custom kiosk
        Exec=~/MyCustomPyGTKApp
        TryExec=
        Icon=
        Type=Application

    Issue: my custom PyGTK app calls window.fullscreen() in its code, so it should open full screen without the window chrome (and it does under a normal session). But it, too, leaves lots of space around it.

    Need help: Can anyone tell me what's going on here? I think it's some issue with borders, as pointed out in Step 8 of http://www.instructables.com/id/Setting-Up-Ubuntu-as-a-Kiosk-Web-Appliance/?ALLSTEPS:

        "If by chance Google Chromium is not stretched to the edges with the --kiosk switch enabled, there is a simple fix. To stretch Chromium, simply log in as your regular user and edit chromeKiosk.sh to not have the --kiosk switch. Then log in as the restricted user, click the wrench and choose Options. On the Personal Stuff tab, select 'Hide system title bar and use compact borders'. Close the options screen and stretch Chromium to fit the monitor. Then go back into the options window and set it to 'Use system title bar and borders'. After this is done, log out of the restricted user (you might need to reboot) and log back in as your regular user. Edit chromeKiosk.sh to include the --kiosk switch again, and Chromium should be full screen the next time you log into the restricted user."

    If I use a custom PyGTK or gtkmm app, how should I get around this issue? window.fullscreen() should occupy the complete screen area. This has to be done programmatically, or in some other way that scales: I have to deploy this on a large number of machines in different geographical areas, and doing it manually on every machine is not possible.
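    A likely cause (an inference, not stated in the thread): a bare custom session starts no window manager, and both --kiosk and window.fullscreen() rely on a window manager to honor the fullscreen request. One commonly used fix is to launch a minimal window manager from a wrapper script referenced by Exec=; a sketch, with the script name and URL hypothetical:

        #!/bin/sh
        # kiosk-session.sh -- hypothetical wrapper pointed to by Exec= in custom.desktop
        # Start a minimal window manager so fullscreen/kiosk requests are honored,
        # then exec the kiosk application.
        matchbox-window-manager -use_titlebar no &
        exec google-chrome --kiosk http://example.com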

    Read the article

  • Core Parking in Ubuntu?

    - by Xxx Xxx
    Core parking is a feature introduced in Windows 7 to improve battery performance. Depending on the operating system's resource use, it may park one or more cores of a multi-core CPU to reduce the computer's power consumption and thermal emissions. Once operations require more processing power, the parked cores are activated again to assist with the tasks. So my question is: is there any way to do "core parking" on Ubuntu 12.04?
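    For reference, the Linux kernel already idles unused cores into low-power C-states on its own; there is no automatic "parking" feature, but cores can be taken offline by hand through the standard CPU hotplug interface in sysfs. A minimal sketch (cpu0 usually cannot be offlined):

        # Park core 3 (requires root)
        echo 0 | sudo tee /sys/devices/system/cpu/cpu3/online
        # Unpark it when more processing power is needed
        echo 1 | sudo tee /sys/devices/system/cpu/cpu3/online
        # Show which CPUs are currently online
        cat /sys/devices/system/cpu/online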

    Read the article

  • ASP.NET MVC 3 Hosting :: MVC 2 Strongly Typed HTML Helper and Enhanced Validation Sample

    - by mbridge
    With the official release of ASP.NET MVC 2 RTM, I decided to put together a quick sample of the enhanced HTML helpers and validation features. I am going to use my sample event site, where I will have a form so a user can search for information about certain events. When the Search page loads, the Search action is fired and returns my strongly typed model to the view.

        [HttpGet]
        public ViewResult Search()
        {
            IList<EventsModel> result = _eventsService.GetEventList();
            var viewModel = new EventSearchModel
                                {
                                    EventList = new SelectList(result, "EventCode", "EventName", "Select Event")
                                };
            return View(viewModel);
        }

    Nothing special here, although I did want to show how to load up a strongly typed drop-down list, because that hung me up for a little bit. To do that, I pass back a SelectList to the view, and my HTML helper should know how to load it. So let's take a look at the markup for the view.

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<EventsSample.Models.EventSearchModel>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Search
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">

            <h2>Search for Events</h2>

            <% using (Html.BeginForm("Search", "Events")) {%>
                <%= Html.ValidationSummary(true) %>

                <fieldset>
                    <legend>Fields</legend>

                    <div class="editor-label">
                        <%= Html.LabelFor(model => model.EventNumber) %>
                    </div>
                    <div class="editor-field">
                        <%= Html.TextBoxFor(model => model.EventNumber) %>
                        <%= Html.ValidationMessageFor(model => model.EventNumber) %>
                    </div>

                    <div class="editor-label">
                        <%= Html.LabelFor(model => model.GuestLastName) %>
                    </div>
                    <div class="editor-field">
                        <%= Html.TextBoxFor(model => model.GuestLastName) %>
                        <%= Html.ValidationMessageFor(model => model.GuestLastName) %>
                    </div>

                    <div class="editor-label">
                        <%= Html.LabelFor(model => model.EventName) %>
                    </div>
                    <div class="editor-field">
                        <%= Html.DropDownListFor(model => model.EventName, Model.EventList, "Select Event") %>
                        <%= Html.ValidationMessageFor(model => model.EventName) %>
                    </div>

                    <p>
                        <input type="submit" value="Save" />
                    </p>
                </fieldset>

            <% } %>

            <div>
                <%= Html.ActionLink("Back to List", "Index") %>
            </div>

        </asp:Content>

    A nice feature is the scaffolding that MVC offers to generate code: I simply right-clicked inside my Search() action in the EventsController, selected "Add View", picked the strongly typed object I wanted to pass to the view, and chose "Edit" as the view content.
    With that, the aspx page was completely generated, although I did have to go back in and change the textbox for the event names to a drop-down list of the names to select from. The new feature in MVC 2 is the strongly typed HTML helpers: my textboxes, drop-down list, and validation helpers are all strongly typed to my model. This gives you the benefits of IntelliSense and also makes debugging easier. "The Gu" has a great post about the feature in case you want more details. The DropDownListFor function that generates the drop-down list was a little tricky for me: you first use a lambda expression to pass in the property the selected value should be assigned to in your model, and then you pass in the list directly from the model.

    Validations

    To validate the form, you can use the strongly typed validation HTML helpers, which inspect your model and return errors if validation fails. The rules are defined directly on the model itself, so let's take a look.

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        namespace EventsSample.Models
        {
            public class EventSearchModel
            {
                [Required(ErrorMessage = "Please enter the event number.")]
                [RegularExpression(@"\w{6}",
                    ErrorMessage = "The Event Number must be 6 letters and/or numbers.")]
                public string EventNumber { get; set; }

                [Required(ErrorMessage = "Please enter the guest's last name.")]
                [RegularExpression(@"^[A-Za-zÀ-ÖØ-öø-ÿ1-9 '\-\.]{1,22}$",
                    ErrorMessage = "The guest's last name must be 1 to 22 characters.")]
                public string GuestLastName { get; set; }

                public string EventName { get; set; }
                public SelectList EventList { get; set; }
            }
        }

    Pretty cool! The only thing left to do is perform the validation in the POST action.

        [HttpPost]
        public ViewResult Search(EventSearchModel eventSearchModel)
        {
            if (ModelState.IsValid) return View("SearchResults");

            IList<EventsModel> result = _eventsService.GetEventList();
            eventSearchModel.EventList = new SelectList(result, "EventCode", "EventName");
            return View(eventSearchModel);
        }

    If the form entries are valid, I simply display the SearchResults view here; in a real-world sample I would also go out and get the results first, but you get the idea. When the form is not valid, I also have to reload my SelectList with the event names before loading the page again. Remember, this is MVC: no ViewState here :) So that's it. Now my form validates the data, and when it fails it looks like this.

    Read the article

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing new options for general backup and restore in the first part of this article, this second part deals with the also rather new feature of Transaction Logging and Replay, released in version 11.1, which enhances the existing restore options. Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1). Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging can fill that gap, giving you an option to capture these post-backup transactions (such as data loads and calculations) for later replay.

    To activate it, add corresponding statements to the Essbase.cfg file using the TRANSACTIONLOGLOCATION command. The complete syntax reads:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Here appname and dbname are optional parameters that, in combination with the ENABLE or DISABLE keyword, give you the chance to set Transaction Logging for certain applications or databases, or to exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application; if neither appname nor dbname is defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs; this directory must already exist or needs to be created before log information can be written to it. NATIVE is a reserved keyword that shouldn't be changed. The following example first enables logging at a more general level for all databases in the application Sample, then disables it at a more granular level for only the Basic database in application Sample, hence excluding it from being logged:

        TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE

    Tip: After applying changes to the configuration file, you must restart the Essbase server in order to initialize the settings.

    Replaying logged transactions after restoring a database, should that be required, can be done only by administrators. The following options are available: in Administration Services, select Replay Transactions on the right-click menu on the database. Here you can choose to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time. Or you can replay transactions selectively, based on a range of sequence IDs, which can be viewed using Display Transactions on the right-click menu on the database. These sequence IDs (0, 1, 2, ..., 7 in the example) are assigned to each logged transaction, indicating the order in which the transactions were performed. This helps to ensure the integrity of the restored data after a replay, as the replay of transactions is enforced in the same order in which they were originally performed; for example, a calculation originally run after a data load cannot be replayed before the data load has been replayed. After a transaction is replayed, you can replay only transactions with a greater sequence ID. For example, replaying the transaction with sequence ID 4 includes all preceding transactions, and afterwards you can only replay transactions with a sequence ID of 5 or greater.

    Tip: After restoring a database from a backup, you should always completely replay all logged transactions that were executed after the backup before executing new transactions.

    But it is not only the transaction information itself that needs to be logged and stored in a specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory:

        ARBORPATH/app/appname/dbname/Replay

    These files are then used during the replay of a logged transaction. By default, Essbase archives only data load and rules files for client data loads, but to specify the type of data to archive when logging transactions you can use the command TRANSACTIONLOGDATALOADARCHIVE as an additional entry in the Essbase.cfg file. The syntax for the statement is:

        TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION]

    The [appname [dbname]] argument works as described above for TRANSACTIONLOGLOCATION, and the valid values for the OPTION argument are CLIENT (the default, archiving data load and rules files for client data loads), SERVER, SERVER_CLIENT, and NONE. Make the setting according to the location from which transactions usually take place. Selecting the NONE option prevents Essbase from saving these files, and the data load cannot be replayed; in this case you must first manually load the data before you can replay the transactions. Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded.

    You can find more detailed information in the following documents: the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1), the Oracle Essbase Online Documentation (rel. 11.1.2.1), and the Enterprise Performance Management System Documentation (including previous releases), or on the Oracle Technology Network. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles, or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page), or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Disclaimer: All methods and features mentioned in this article must be considered and tested carefully in relation to your environment, processes and requirements. For guidance, please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.

    Read the article

  • OS evaluation in a bash script

    - by moata_u
    Before my script runs, I want to evaluate which operating system the user is on, Ubuntu or Solaris, because some commands take different options on each OS (sed, for example). I was trying the following:

        sysEval=`grep "ubuntu" | uname -a`
        if [ sysEval ]; then
            .......some command
        else
            .......some command
        fi

    Note that my script will run only on Ubuntu or Solaris. This doesn't seem to work!
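    For what it's worth, the pipe above is reversed (uname's output never reaches grep, so the substitution just captures uname -a), and [ sysEval ] tests the literal string "sysEval", which is always true. A minimal sketch of one common way to write the check, using the fact that uname -s prints "Linux" on Ubuntu and "SunOS" on Solaris (the Solaris sed path is an assumption; adjust to your environment):

        #!/bin/sh
        case "$(uname -s)" in
            Linux)
                SED=sed                  # GNU sed on Ubuntu
                ;;
            SunOS)
                SED=/usr/xpg4/bin/sed    # assumption: the XPG4 sed on Solaris
                ;;
            *)
                echo "unsupported OS" >&2
                exit 1
                ;;
        esac
        echo "using $SED"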

    Read the article

  • This Week on the Green Data Center Management Front

    Among the big news this week in green data center management: Horizon Data Center Solutions announces a partnership with Power Loft, Cisco, GDSYS, and Incheon Metropolitan City partner to build a next-generation green data center, and Vycon Energy demonstrates how its flywheel backup power systems can be used for health-care data system power backup.

    Read the article

  • SAF Evaluation part II the Formal Methods

    Previously I talked about evaluating a candidate architecture in code. This post is dedicated to evaluation on paper. I remember one system I was working on where I was keen on making the architecture asynchronous and message oriented (it was all circa 2001, by the way). However, I was new on the team and my role (as the [...])...

    Read the article

  • Why is Mac OS X referred to as the developer's OS? [closed]

    - by dbramhall
    Possible Duplicate: Why do programmers use or recommend Mac OS X? I have heard people referring to Mac OS X as the 'developer's operating system' and I was wondering why? I have been using Mac OS X for years but I only see Mac OS X as a developer's OS if the developer tools are installed, without them it's not really a developer's OS. Also, the Terminal is obviously a huge plus for developers but is this it?

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:

    - Build a compute platform optimized to run your technologies
    - Develop application-aware, intelligently caching storage components
    - Take an impressively fast network technology interconnecting it with the compute nodes
    - Tune the application to scale with the nodes to yet unseen performance
    - Reduce the amount of data moving via compression
    - Provide this all in a pre-integrated single product with a single-pane management interface

    All these ideas have been around in IT for quite some time now; the real Oracle advantage is adding the last one to put them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes it includes SPARC T4-4 servers, which we have learned to appreciate and respect for their features:

    - The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets; that is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.
    - While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar for single-threaded performance too - a humble 5x better than their ancestors.
    - Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
    - Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
    - A PCI controller right on the chip, for customers who need I/O performance.
    - Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated and independent of each other within a physical server.

    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide DB backend storage for the Oracle Databases you run on the SPARC SuperCluster; that is what they are built and tuned for. These storage cells are SQL-aware.
    That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw datablocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to prefilter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O. The cells don't only pre-filter, they also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer. They also support the magical HCC (Hybrid Columnar Compression), that is, data can be stored in a precompressed form on the storage. Less data to transfer. And of course one can't simply rely on disks for performance, so Flash Storage is included for caching.

    III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:

    - Real high speed: 40 Gbit/s, full duplex of course, and a really low latency.
    - RDMA, Remote Direct Memory Access. This technology allows the DB nodes to do exactly that: remotely, directly place SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
    - You can also run IP over InfiniBand if you please - that's the way the compute nodes communicate with each other.

    IV. It includes a general-purpose storage too: the ZFSSA, a unified storage system providing both NAS and SAN access, with the following features:

    - NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
    - All the ZFS features on board: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
    - Storage heads in an HA-cluster configuration, providing availability of the data.
    - DTrace Live Analytics in a web-based administration UI.
    - It serves as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting, and cloning data for them.

    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked through its interior. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow. What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments, such as SAP, if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines or even Oracle Solaris Zones, and monitor and manage those from a central UI.
    Here are the key takeaways once again. The SPARC SuperCluster:

    - Is a pre-integrated Engineered System
    - Contains SPARC T4-4 servers with built-in virtualization, cryptography and dynamic threading
    - Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
    - Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
    - Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
    - Can grow from a single half-rack to several full racks in size
    - Supports the consolidation of hundreds of applications

    To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together, and together they are way more than the sum of their parts - a careful and actually very time-consuming integration process is necessary to orchestrate all of this for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

    -- charlie

    Read the article

  • ASP.NET MVC in Action: The model in depth

    In this chapter, we’ll explore a model for a system that helps to manage a small conference, like a Code Camp. The model enables the application to provide an interesting service. Without the model, the application provides no value. We place great importance on creating a rich model with which our controllers can work.

    Read the article

  • Running Teamsite User Admin tool IWUSERADM.exe from ASP.NET

    - by Narendra Tiwari
    This was really a head-scratching task for me. I tried many options, but nothing worked. Finally I found a workaround on Google to achieve this via the Task Scheduler.

    PROBLEM
    When we run the Teamsite user administration command-line tool IWUSERADM.exe through ASP.NET, it gives the following error:

        Application popup: cmd.exe - Application Error : The application failed to initialize properly (0xc0000142). Click on OK to terminate the application.

    CAUSE
    No specific cause; it seems to be a bug, supposedly resolved by this Microsoft patch: http://support.microsoft.com/kb/960266. There is nothing related to a permission issue - my web application is impersonated with an administrator account. Of course, running a bat file under an admin account is a potential security threat, but for this scenario let's confine our discussion to running the command-line tool.

    RESOLUTION
    I have not tried the patch, as I was not permitted to run it on the server. Below are the steps to achieve the requirement.

    1. Create a batch file which runs IWUSERADM.exe:

        echo - Add Teamsite User
        CD E:\Appli\GN00\iw-home\bin
        iwuseradm add-user %1

    2. Temporarily create a scheduled task and run the .bat file from ASP.NET code, using the TaskScheduler library: http://www.codeproject.com/KB/cs/tsnewlib.aspx.

    3. Here is the function:

        private int AddTeamsiteUser(string strBatchFilePath, string strUser)
        {
            // Get a ScheduledTasks object for the local computer.
            ScheduledTasks st = new ScheduledTasks();

            // Create a task
            Task t;
            try
            {
                t = st.CreateTask("~AddTeamsiteUser");
            }
            catch
            {
                throw new Exception("Scheduled task ~AddTeamsiteUser already exists.");
            }

            t.ApplicationName = strBatchFilePath;
            t.Parameters = strUser;
            t.Comment = "Adding user to Teamsite application";

            // Set the account under which the task should run.
            t.SetAccountInformation(yourLogin, yourPassword);

            t.Save();
            t.Run();
            Thread.Sleep(2000); // crude wait for the task to finish (sync issue)

            // Remove the scheduled task
            st.DeleteTask("~AddTeamsiteUser");
            return t.ExitCode;
        }

    Below are a few resources related to the above scenario:
    - Task Scheduler Class Library for .NET: http://www.codeproject.com/KB/cs/tsnewlib.aspx
    - Run a .BAT file from ASP.NET: http://codebetter.com/blogs/brendan.tompkins/archive/2004/05/13/13484.aspx
    - TaskScheduler Class: http://msdn.microsoft.com/en-us/library/system.threading.tasks.taskscheduler.aspx
    - Application hangs while running iwuseradm.exe through ASP.NET: http://bytes.com/topic/asp-net/answers/733098-system-diagnostics-process-hangs

    Read the article

  • Non-English Character Display in Oracle SQL Developer

    - by thatjeffsmith
    I get a variation on this question at least once a week, if not more frequently:

        "I'm from Israel, and the language on the databases is Hebrew. When I use the old and deprecated SQL*Plus (Windows rich client) I can see the Hebrew clearly; when I use the latest SQL Developer, I get gibberish."

    This question appears on the forums about every week or so as well. So what's the deal? Well, it starts with a basic misunderstanding of NLS client parameters. These should accurately reflect the language and locality setup on your LOCAL machine. DO NOT COPY what's set in the database. These parameters work together with the database so that information can be transferred back and forth correctly. Having the wrong NLS parameters locally can be bad. [ORACLE DOCS] Setting the NLS_LANG parameter properly is essential to proper data conversion. The character set that is specified by the NLS_LANG parameter should reflect the setting for the client operating system. Setting NLS_LANG correctly enables proper conversion from the client operating system character encoding to the database character set. When these settings are the same, Oracle Database assumes that the data being sent or received is encoded in the same character set as the database character set, so character set validation or conversion may not be performed. This can lead to corrupt data if conversions are necessary.

    OK, so what are you supposed to do? Set the font! Nine times out of ten, this preference fixes display issues. Make sure you set a font that supports the characters you're trying to display - it's as simple as that. This preference defines the font used to display characters in the editors and the data grids. If you have it set to a font that doesn't have Hebrew character support, you're not going to see Hebrew in SQL Developer. A few years ago... wow, like 15 years ago, I learned that the Tahoma font is pretty Unicode-friendly.

    Bad font selection: a font that's not non-English friendly. Good font selection: the exact same text, rendered with the Tahoma font.

    Summary: Having problems seeing non-English text in SQL Developer? Check the font! And do not start messing with NLS parameters without talking to your DBA first.

    Read the article

  • DTP in Linux with Pagestream

    Ghacks: "Remember Amiga? Well, if you're old enough to remember that platform, then you might remember the Pagestream desktop publishing system. Pagestream began in 1986 as Publishing Partner for the Atari Computers."

    Read the article

  • How can I use SSMTP to send through Gmail and accept relay requests from another machine?

    - by Kris Anderson
    I'm running sSMTP on Ubuntu 12.04 and 14.04, and it's sending emails perfectly through Gmail. However, I'm also running an OmniOS machine (a Solaris derivative) whose mailer doesn't handle TLS, which means it can't connect to and send through Gmail. What I'd like to do is point the OmniOS machine at one of my local Ubuntu machines and have the Ubuntu machine relay all of the email for the OmniOS machine. How can I accomplish this?
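    For context, sSMTP is a send-only sendmail replacement with no listening daemon, so the "accept relay requests" half needs a real MTA (Postfix, for example) listening on the Ubuntu box; sSMTP itself only covers the Gmail leg. A sketch of that leg, with placeholder credentials:

        # /etc/ssmtp/ssmtp.conf -- outbound relay through Gmail (placeholder values)
        root=you@gmail.com
        mailhub=smtp.gmail.com:587
        AuthUser=you@gmail.com
        AuthPass=your-app-password
        UseSTARTTLS=YES
        FromLineOverride=YES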

    Read the article

  • Let the RAM improve performance

    - by user1717079
    I have a low-profile machine, but with a lot of fast RAM: 4 GB, which is really more memory than I will probably ever use, not even half of it, since I just use this machine for coding and browsing the web. The HDD is really slow, so overall performance is bad when booting, caching, or starting new programs. I'm wondering if Ubuntu provides some setting or utility to address this and let my system rely more on RAM.
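    For what it's worth, Linux already uses otherwise-free RAM as page cache automatically; the settings below only bias the kernel toward keeping more in RAM, and the tmpfs mount is an assumption about where scratch files could live (a sketch, not a tuning guide):

        # Swap less eagerly and keep directory/inode caches around longer
        sudo sysctl vm.swappiness=10
        sudo sysctl vm.vfs_cache_pressure=50
        # Keep scratch/build files on a RAM-backed filesystem
        sudo mkdir -p /mnt/ramdisk
        sudo mount -t tmpfs -o size=1G tmpfs /mnt/ramdisk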

    Read the article

  • T-SQL Tuesday #13: Clarifying Requirements

    - by Alexander Kuznetsov
    When we transform initial ideas into clear requirements for databases, we typically have to make the following choices: frequent maintenance vs. doing it once. As we are clarifying the requirements, we need to determine whether we want to continue spending considerable time maintaining the system, or whether we want to finish it up and move on to other tasks. Race car maintenance vs. installing electric wiring is my favorite analogy for this kind of choice. In some cases we need to squeeze every last bit...(read more)

    Read the article

  • Subscription service or software to handle a Magazine's PDF

    - by Paolo
    I'm looking for installable or hosted software (a service) to handle the process of public users subscribing to the Magazine and automatically receiving the PDF when an admin uploads a new one. The system will have to: handle the money part (PayPal & Co. are OK); let users buy old issues of the Magazine; warn users when their subscription is expiring; etc. PDF stamping and WordPress integration (user credential sharing, page access to subscribed goods, etc.) would be a big plus.

    Read the article

  • VS 2010 ALM Whitepapers – link from Neno Loje

    - by johndoucette
    Overview of Visual Studio ALM whitepapers by Microsoft:

    Overview
    - Visual Studio 2010 Quick Reference Guidance

    Installation, Configuration & Administration
    - Team Foundation Installation Guide for Visual Studio Team System 2010
    - Administration Guide for Microsoft Visual Studio 2010 Team Foundation Server
    - Visual Studio 2010 TFS Upgrade Guide
    - Visual Studio 2010 and Team Foundation Server 2010 VM Factory
    - TFS Integration Platform
    - Visual Studio 2010 Licensing White Paper

    Requirements
    - Visual Studio 2010 Team Foundation Server Requirements
    - Requirements Management Guidance

    Version Control & Configuration Management
    - Visual Studio TFS Branching Guide 2010

    Some guide or whitepaper missing? Let me know!

    Read the article

  • Hierarchy flattening of interfaces in WCF

    - by nmarun
    Alright, so say I have my service contract interface as below:

        [ServiceContract]
        public interface ILearnWcfService
        {
            [OperationContract(Name = "AddInt")]
            int Add(int arg1, int arg2);
        }

    Say I decided to add another interface with a similar add "feature":

        [ServiceContract]
        public interface ILearnWcfServiceExtend : ILearnWcfService
        {
            [OperationContract(Name = "AddDouble")]
            double Add(double arg1, double arg2);
        }

    My class implementing ILearnWcfServiceExtend ends up as:

        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public int Add(int arg1, int arg2)
            {
                return arg1 + arg2;
            }

            public double Add(double arg1, double arg2)
            {
                return arg1 + arg2;
            }
        }

    Now when I consume this service and look at the proxy that gets generated, here's what I see:

        public interface ILearnWcfServiceExtend
        {
            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfService/AddInt", ReplyAction="http://tempuri.org/ILearnWcfService/AddIntResponse")]
            int AddInt(int arg1, int arg2);

            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/AddDouble", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/AddDoubleResponse")]
            double AddDouble(double arg1, double arg2);
        }

    Only ILearnWcfServiceExtend gets 'listed' in the proxy class, not the (base interface) ILearnWcfService. But then, to uniquely identify the operations that the service exposes, the Action and ReplyAction properties are set. So in the above example, the AddInt operation has its Action property set to 'http://tempuri.org/ILearnWcfService/AddInt' and the AddDouble operation has an Action property of 'http://tempuri.org/ILearnWcfServiceExtend/AddDouble'. Similarly, the ReplyAction properties are set corresponding to the namespace in which each operation is declared. 'http://tempuri.org' is chosen as the default namespace, since the Namespace property on the ServiceContract is not defined. The other thing is the service contract itself - the Add() method. You'll see that in both interfaces the method names are the same. As you might know, this is not allowed in WSDL-based environments, even though the arguments are of different types; it is allowed only if the Name attribute of the OperationContract is set (as done above). This causes a change in the name of the service contract itself in the proxy class - see that the names are changed to AddInt / AddDouble respectively.

    Lesson learned: the interface hierarchy gets 'flattened' when the WCF service proxy class gets generated.

    Read the article

  • SQLAuthority News Free Download Microsoft SQL Server 2008 R2 RTM Express with Management Tools S

    This blog post is in response to several inquiries about free downloads of SQL Server 2008 R2 RTM. Microsoft has announced SQL Server 2008 R2 as RTM (Release To Manufacturing). Microsoft SQL Server 2008 R2 Express is a powerful and reliable data management system that delivers a rich set of features, data protection, and performance [...]

    Read the article

  • 5 Free Intrusion Detection Softwares (IDS)

    Tools and Utilities to Monitor Your Network For Suspicious or Malicious Activity Snort for Windows Snort is an open source network intrusion detection system, capable of performing real-time traffi... [Author: Alam Je - Computers and Internet - March 24, 2010]

    Read the article

  • SCOM, 90 days in, III. Stuff to Add

    - by merrillaldrich
    This is the third installment of a series on our deployment of System Center at my workplace, emphasis on SQL Server MP. At this point we’ve got Operations Manager installed, and up and running, and we’ve been able to categorize all the monitored servers into production, preproduction, test and DR using groups that have dynamic membership rules. We’ve got the SQL management pack working with out-of-the-box settings, and used it to locate all the SQL Server stack services like the engine, reporting...(read more)

    Read the article

  • Upgrade problem - "dependency problems prevent configuration of libnih-dbus1"

    - by raycho
    I have a problem with upgrading. When I run sudo dpkg --configure -a, this is what happens:

        dependency problems prevent configuration of libnih-dbus1:
         libnih-dbus1 depends on libnih1 (= 1.0.3-4ubuntu9); however:
          Version of libnih1 on system is 1.0.3-4ubuntu2.
         libnih-dbus1 depends on libc6 (>= 2.3.4); however:
          Package libc6 is not installed.
        dpkg: error processing libnih-dbus1 (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         libnih-dbus1

    Please help!
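    A common way out of this half-configured state is to let apt repair the dependency chain (the package names below are taken from the error output; this is a sketch, not a guaranteed fix):

        sudo apt-get update
        # Ask apt to download and install whatever is needed to fix broken dependencies
        sudo apt-get install -f
        # Or pull the mismatched packages explicitly, then finish configuration
        sudo apt-get install libnih1 libc6
        sudo dpkg --configure -a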

    Read the article
