Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Design suggestions needed to create a MathBuilder framework

    - by Darf Zon
    Let me explain what I'm trying to create. I'm building a framework whose idea is to provide base classes for generating math problems. Why do I need this framework? Because I realized that every time I create a new math problem I follow the same steps:

    - Configuration settings, such as number ranges. For example, if I'm developing multiplication problems, the beginner level only generates the first number between 2 and 5, while the advanced level generates it between 6 and 9.
    - A generate-problem method. Every time, I need to invoke a method like this to generate the problem: it receives the configuration settings, generates the numbers according to them, and builds the object with the respective data.
    - Problem validation. Sometimes the generated problem is not valid. For example, suppose I'm creating fractions in most-simplified form; if I get 2/4, the program should detect that this is not valid and generate another one, such as 1/2.
    - Loading the view. Each problem has a custom view (please see the images below).
    - All of the problems must know how to CHECK whether the user's result is correct.
    - All of these problems have answers. Some require just one answer, others may require more than one, so I need a way to keep it flexible for the developer to define all the answers they want to use.

    At the beginning I started using PRISM: the idea was to generate a module for each kind of math problem and load it into the main system. Those are, I think, the most important points of this idea. Let me show some problems that I created in a standalone WPF program. Here I have a math problem about areas: when I generate the problem, I pass the object to the view and it draws it. In the beginner level, I set in the configuration settings that only square shapes are loaded, but the advanced level can load triangles and squares randomly. In this other one, I generate a binary problem such as addition, subtraction, multiplication or division. The above just generates a single problem; the idea is to show a test or quiz, i.e. a worksheet (that's what I call a collection of problems) that the user can answer. I hope you get the idea from my ugly drawing. How do I load these math problems? As I said above, I started using PRISM, where each module contains one kind of math problem. This is a snapshot of my first demo: at the bottom it shows the modules loaded, and in the center the respective configurations or levels. Up to this moment, I have no idea how to start creating this software. I just know that I need a question/problem class, a response class and a user class, but I get lost on what properties they should contain. Please give me a little hand with this framework. I put a lot of effort into this question, so if anything isn't clear, let me know and I'll clarify it.

    Read the article

  • How to open a file or folder from Terminal

    - by Victor Hugo Souza
    I'd like to know if there is a way to open a file or a folder from the terminal. When I write a URL in the terminal, it lets me click it and open the link in my default browser, so I'd like to do the same with my files and folders. I know there is a way via the CLI using gnome-open or xdg-open, but I'd like a solution that uses the mouse, by clicking on the path or the URL. E.g. when I run "pwd", the printed path should be clickable and open in Nautilus. It's the inverse of what "nautilus-open-terminal" does.
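    For reference, the CLI route mentioned above looks like this (a minimal sketch; xdg-open ships with xdg-utils, gnome-open with libgnome):

        xdg-open ~/Documents      # opens the folder in the default file manager (Nautilus)
        xdg-open notes.txt        # opens the file with its default application
        gnome-open ~/Documents    # GNOME-specific equivalent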

    Read the article

  • What GUI are there for Axel or for other such downloaders that use multiple connections?

    - by cipricus
    In order to enjoy my maximum download speed, I use and like Axel very much, but from time to time I download multiple files, and having so many windows open has its disadvantages. I use Axel with FlashGot in Firefox (SeaMonkey, etc.), but I would like to add a GUI for it, and possibly have multiple downloads in a nice list as in any civilized downloader. I am not aware of a GUI for Axel that works: Axel-kapt crashes (a question on how to use it properly in Ubuntu got only one somewhat dismissive answer, by yours truly...), and Gaxel just opens a window with empty fields that I have to fill in manually (which defeats the purpose). I would like to know how to install something like Gwget, which is described in an old answer as an alternative (though Gwget itself might be too old too). Help!

    Read the article

  • How to install and make development use of gtk+ on Ubuntu 12.04

    - by el10780
    I want to install GTK+ in order to build an application with a graphical environment. I went to the official GTK website (gtk.org) and tried to follow the instructions, but I wasn't able to make it work. I think I successfully managed the whole process up to the part where I had to run make install and then ldconfig. So far so good. After that, though, complete chaos: I do not know how to set all the configuration so that my compiler knows where to look for the include files. I just want to be able to write a source file on my Ubuntu machine, compile it, and run it.
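    A minimal sketch of the usual route on 12.04, assuming the distribution's GTK+ 3 packages are acceptable instead of a source build; pkg-config is what tells the compiler where the headers and libraries live (hello.c is a hypothetical source file):

        sudo apt-get install libgtk-3-dev                            # headers, libraries, pkg-config metadata
        gcc hello.c -o hello $(pkg-config --cflags --libs gtk+-3.0)  # pkg-config fills in the -I and -l flags
        ./hello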

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on: single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor, and overall throughput of the SPARC T4-1 server using multiple threads.

    The tests consisted in reading large amounts of data (tens of gigabytes), processing them, and writing them back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project (CPU performance), bottlenecks were removed as much as possible on the memory and IO subsystems: when possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: testing throughput on the Oracle T4-1. The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks, and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1GB file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        […]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC): once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline. For the database load tests, only the best IO configuration (using the external storage devices) was used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: testing the single-thread performance. Seven different tests were performed on both servers. Given that only one thread, and thus one file, was read, no IO bottleneck was involved: all data was served from the ZFS cache.

    - Read File → Filter → Write File: read a file, filter the data, and write the filtered data to a new file. The filter is set on the “Status” column: only lines with the status set to “A” are selected. This limits each output file to about 500 MB.
    - Read File → Load Database Table: read a file and insert its rows into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, and write the result to a new file.
    - Division & Square Root: read a file, perform a division and a square root on a numeric column, and write the resulting data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform it, and write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, and write the resulting data to a new file.

    The following table and graph present the final results of the tests. The throughput unit is thousands of lines processed per second (K lines/second). Improvement is the percentage of improvement between the T5140 and the T4-1.

        Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (K lines/s)   T5140 (K lines/s)
        Read/Filter/Write             125               806             645%             100                 16
        Read/Load Database            195              1111             570%              64                 11
        Average                        96               557             580%             130                 22
        Division & Square Root        161              1054             655%              78                 12
        Oracle DB Dump                164               945             576%              76                 13
        Transform                     159              1124             707%              79                 11
        Sort                          251              1336             532%              50                  9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion in an Oracle DB is faster, with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
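    As a flavor of what the Read File → Filter → Write File test does, here is a hedged shell analogue using the sample layout above (field 8 is Cust_Status; the file names are hypothetical):

        # Keep only the lines whose 8th ';'-separated field is "A":
        awk -F';' '$8 == "A"' customers.csv > customers_filtered.csv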

    Read the article

  • Should I avoid SharePoint Development in Visual Studio?

    - by SaphuA
    Hello. Not long ago I started an internship at a company that provides SharePoint consultancy, hosting and development. While their consultancy seems to be pretty good and solid, I feel their development department lacks direction; the reason, most likely, is that they stopped outsourcing not too long ago. One thing that I've frequently bumped my head against is the following: my supervisor strongly insists that everything that can be done natively in SharePoint (somehow this includes editing XSLT files in Designer) should be done in SharePoint, even if this results in longer development time (at least when they make me write the XSLT) and reduced usability. Her main arguments for this are: better maintainability, and that editing the functionality doesn't require programming knowledge. I feel the company is a little biased and I am unable to get a decent discussion going there, which is why I am looking for other places to get some responses on the subject (and not only on my supervisor's arguments, but on the subject in general). Kind regards

    Read the article

  • How to display a reboot-required user notification after installing a custom package in Linux?

    - by user284588
    After installing a custom package I need to force a reboot of the system. I looked at a couple of solutions; one was to use notify-send to display a user notification followed by a reboot command, which worked as planned. But the user notification is only shown when I install the package from the command line, not when I install it through the Software Center. I came across some posts suggesting to add the following to the postinst script:

        [ -x /usr/share/update-notifier/notify-reboot-required ] && \
            /usr/share/update-notifier/notify-reboot-required || true

    I tried including the above in the postinst script, but all it does is update the two files /var/run/reboot-required.pkgs and /var/run/reboot-required with restart information. It neither displayed a user notification nor rebooted the system after the package was installed. Is there a way to display a reboot-required user notification in Ubuntu/Fedora/openSUSE?
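    A hedged sketch of a postinst that both flags the reboot and tries to notify the desktop session directly. The user detection is an assumption (one local desktop user), and on some setups DBUS_SESSION_BUS_ADDRESS must be exported as well for notify-send to reach the session:

        #!/bin/sh
        # Flag the reboot for update-notifier (this is what fills in
        # /var/run/reboot-required, as observed above):
        [ -x /usr/share/update-notifier/notify-reboot-required ] && \
            /usr/share/update-notifier/notify-reboot-required || true

        # postinst runs as root, so send the notification as the logged-in user:
        user=$(who | awk 'NR==1 {print $1}')
        if [ -n "$user" ]; then
            sudo -u "$user" DISPLAY=:0 notify-send \
                "Restart required" "Reboot to finish installing this package." || true
        fi
        exit 0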

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the relational database and the NoSQL database in the Big Data story. In this article we will understand the role of Key-Value Pair databases and Document databases in supporting the Big Data story. Now we will see a few examples of operational databases: Relational Databases (yesterday's post), NoSQL Databases (yesterday's post), Key-Value Pair Databases (this post), Document Databases (this post), Columnar Databases (tomorrow's post), Graph Databases (tomorrow's post), Spatial Databases (tomorrow's post).

    Key-Value Pair Databases: Key-Value Pair databases are also known as KVP databases. A key is a field name, an attribute, an identifier; the content of that field is its value, the data that is being identified and stored. They are a very simple implementation of NoSQL database concepts: they have no schema, hence they are very flexible as well as scalable. The disadvantages of Key-Value Pair (KVP) databases are that they do not follow the ACID (Atomicity, Consistency, Isolation, Durability) properties, and that they require data architects to plan for data placement, replication, and high availability. In KVP databases the data is stored as strings. Here is a simple example of what a Key-Value database looks like:

        Key       Value
        Name      Pinal Dave
        Color     Blue
        Twitter   @pinaldave
        Name      Nupur Dave
        Movie     The Hero

    As the number of users grows, Key-Value Pair databases start getting difficult to manage. As there is no specific schema or set of rules associated with the database, there is also a chance that the database grows exponentially. It is very crucial to select the right Key-Value Pair database, one which offers an additional set of tools to manage the data and provides finer control over various business aspects of it.

    Riak: Riak is one of the most popular Key-Value databases. It is known for its scalability and performance with high-volume, high-velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will discuss Riak further in future blog posts. Key-Value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexible data storage with scalability in mind, KVP databases are good options to consider.

    Document Databases: There are two different kinds of document databases: 1) full document content (web pages, Word docs, etc.) and 2) storing document components for storage. It is the second type that we are talking about here. These databases use JavaScript Object Notation (JSON) and Binary JSON (BSON) for the structure of the documents. JSON is a very easy-to-understand format and very easy to write for applications. The two major JSON structures used for document databases are 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open-source non-relational document databases.

    MongoDB: MongoDB databases are made up of collections. Each collection is built of documents, and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available and supports query services as well as MapReduce. It is often used in high-volume content management systems.

    CouchDB: CouchDB databases are composed of documents, which consist of fields and attachments. It supports ACID properties. The main attraction point of CouchDB is that it will continue to operate even when network connectivity is sketchy; due to this nature, CouchDB prefers local data storage. A document database is a good choice when users have to generate dynamic reports from elements which change very frequently. A good example of document database usage is real-time analytics in social networking or a content management system.

    Tomorrow: In tomorrow's blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
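    A hedged taste of the document model through CouchDB's HTTP API (assuming a default local CouchDB listening on port 5984; the database name and fields are made up):

        curl -X PUT http://127.0.0.1:5984/authors              # create a database
        curl -X POST http://127.0.0.1:5984/authors \
             -H 'Content-Type: application/json' \
             -d '{"Name": "Pinal Dave", "Twitter": "@pinaldave"}'
        curl http://127.0.0.1:5984/authors/_all_docs           # list the stored documents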

    Read the article

  • Read-only USB stick that won't let me do anything to it

    - by Jonathon
    Somehow I messed up and accidentally turned my USB stick into a read-only file system. I have tried a bunch of things to delete the files, including the basics (rm -f myfile) and attempting to allow writing (sudo chmod +w myfile) before deleting, but none of this seems to work. Any ideas on what I can do? I don't have anything on the USB stick that I need, but I don't want to throw away an otherwise perfectly good piece of equipment. How can I make it work? Am I going about this completely the wrong way?
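    A hedged recovery attempt, assuming the stick shows up as /dev/sdb1 (check with lsblk or dmesg after plugging it in):

        sudo umount /dev/sdb1
        sudo fsck /dev/sdb1                # FAT sticks often come up read-only after filesystem errors
        sudo mount -o rw /dev/sdb1 /mnt
        # If dmesg still reports the device as write-protected after a clean fsck,
        # the flash controller has likely locked itself read-only (failing hardware).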

    Read the article

  • My website directories download instead of actually opening in the browser

    - by numerical25
    I added some screencasts to show what I am having issues with: http://screencast.com/t/212t3ANINqk http://screencast.com/t/bR44U1wkvNZl http://screencast.com/t/iDS7APYYsa The browser downloads my subdirectories instead of opening them up and displaying the index file of each page. Here is the situation: I am trying to get my web service up using MacPorts and I am just trying to configure all the files. I am using PHP, Apache, etc. The browser reaches the localhost root, but anything beyond that it cannot find. Edit: I've tried to add the following to httpd.conf within the <IfModule mime_module> block, but no luck:

        AddType application/x-httpd-php .php
        AddType application/x-httpd-php .phtml
        AddType application/x-httpd-php .php3
        AddType application/x-httpd-php .php4
        AddType application/x-httpd-php .html
        AddType application/x-httpd-php-source .phps
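    A hedged checklist for a MacPorts Apache/PHP setup under /opt/local; the apxs line is roughly how the MacPorts php5 port registers the module, but treat it as an assumption and check the port's own notes:

        grep -i libphp /opt/local/apache2/conf/httpd.conf   # is mod_php loaded at all?
        cd /opt/local/apache2/modules
        sudo /opt/local/apache2/bin/apxs -a -e -n php5 libphp5.so
        sudo /opt/local/apache2/bin/apachectl restart
        # Also make sure httpd.conf lists index.php, e.g.:
        #   DirectoryIndex index.php index.html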

    Read the article

  • How to produce assets effectively on large Flash game projects?

    - by Antoine Lassauzay
    I have been working on Flash games professionally for two years now, and somehow, getting our artists to produce assets the right way is one of our biggest challenges. More precisely, it is very hard to have them follow any kind of structure and/or standards, or take performance into consideration. I would also say that most of our issues concern UI and related animations. Our current workflow (on a Facebook hidden-object game) is:

    - Artists produce PSDs and animate prototypes in Flash
    - Artists re-organize their FLA files to be a bit more "programmer friendly"
    - Programmers retouch assets until they have the right structure, and export classes inside a SWC from Flash
    - Programmers try to improve performance, sometimes degrading the quality of the game graphics

    Our main idea is to hire somebody dedicated to preparing assets for the programmers, but I am really looking forward to improving the pipeline. I was wondering if you have tips of any kind to improve this workflow, whether it be team organization, training, tools or tips for Flash. Any explanation of your own asset pipeline is well appreciated too.

    Read the article

  • Trouble setting up DNS for a VPS

    - by Zenph
    I registered a domain at Namecheap and pointed the DNS to my host at vps.net. The strange thing is, when I did that, the site was showing up; I even uploaded files and everything was displaying correctly on my new domain. Now it is just the Namecheap holding page again. I have no idea why this is happening, as I haven't touched the configuration since it was working. Could anyone point me in the right direction? When I enter http://domain.com it redirects to http://www.domain.com and the Namecheap holding page is shown. Prior to this, domain.com was showing what the host was serving. I am completely lost and have no idea where to start, so I'd appreciate any help I can get.
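    A hedged first diagnostic is to ask the public DNS what it actually sees ("domain.com" being the placeholder from the question):

        dig +short NS domain.com            # should list vps.net's nameservers, not Namecheap's
        dig +short A www.domain.com         # should be the VPS IP
        whois domain.com | grep -i "name server"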

    Read the article


  • Visual Web Developer 2010 Express, automated testing, and SVN

    - by Mr. Jefferson
    We have an HTML designer who is not a developer but needs to modify .aspx files from our ASP.NET 2.0 projects from time to time in order to get CSS to work properly with them. Currently, this involves giving her the .aspx page by itself, which she opens and edits via Visual Studio 2008 (her computer used to be a developer's). I'm considering getting her set up with Visual Web Developer 2010 Express and Subversion access so she can be more independent, but I wanted to make sure VS Express will work properly with what we do. So: Does VWD 2010 Express support automated tests? If no to the above, what happens when it opens a solution file that includes a test project, modifies it, and saves it? Are there any potential snags with setting up AnkhSVN with VWD 2010 Express?

    Read the article

  • google-chrome video application association

    - by Ben Lee
    Is there any way to tell google-chrome to launch video files of particular types in an external application (or even just to bring up the download box as if the type were unhandled), instead of showing the video inside the browser? Searching online, it seems that Chrome is supposed to use xdg-mime for file associations, but is apparently ignoring this for video. For example, when I run:

        xdg-mime query default video/mpeg

    it returns dragonplayer.desktop. But when I click on an MPEG video link, Chrome displays it internally instead of launching Dragon Player (if I double-click on an MPEG file in my file manager, on the other hand, it does open Dragon Player). So is there a way to tell Chrome to respect this setting, or another way to coax Chrome into opening the file externally? If it matters, I'm running the latest version of google-chrome stable (not Chromium) at the time of writing, v. 18.0.1025.151, on Kubuntu 11.10.

    Read the article

  • Is there an established convention for separating Windows file names in a string?

    - by Heinzi
    I have a function which needs to output a string containing a list of file paths. I can choose the separation character but I cannot change the data type (e.g. I cannot return a List<string> or something like that). Wanting to use some well-established convention, my first intuition was to use the semicolon, similar to what Windows's PATH and Java's CLASSPATH (on Windows) environment variables do: C:\somedir\somefile.txt;C:\someotherdir\someotherfile.txt However, I was surprised to notice that ; is a valid character in an NTFS file name. So, is the established best practice to just ignore this fact (i.e. "no sane person should use ; in a file name and if they do, it's their own fault") or is there some other established character for separating Windows paths or files? (The pipe (|) might be a good choice, but I have not seen it used anywhere yet for this purpose.)

    Read the article

  • BASH scripting: check if running with sudo/superuser; if not, don't run and return an error

    - by EvilPhoenix
    This is something I've been curious about. I make a lot of small bash scripts (.sh files) for tasks that I do routinely, and some of those tasks require everything to be run as superuser. I've been curious: is it possible, within the bash script and before anything else runs, to check whether the script is being run as superuser, and if not, print a message saying "You must be superuser to use this script" and then terminate the script itself? The other side of that is that I'd like the script to run without that error when the user is superuser. Any ideas on the coding (if statements, etc.) needed to do the above?
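    A common sketch for this, using bash's built-in $EUID (the effective user ID, which is 0 for root and for anything run via sudo):

        #!/bin/bash
        # Refuse to run unless we are effectively root.
        if [[ $EUID -ne 0 ]]; then
            echo "You must be superuser to use this script." >&2
            exit 1
        fi

        echo "Running with superuser privileges..."
        # ... privileged work goes here ...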

    Read the article

  • Nexus 7 (4.2.2) stuck as read-only on Ubuntu 13.04 (PC)

    - by Dalladubb
    I have a Nexus 7 running the latest Android (4.2.2) that seems to be stuck as read-only: I cannot transfer any files to or from the device, though I am free to look through it. Permissions are:

        View Content:   Only Owner
        Change Content: Nobody
        Access Content: Nobody

    And when I try to change the permissions I get this error: "Operation not supported by backend". I'm baffled. This is a stock install of Ubuntu on my PC and the install isn't that old. Am I missing a lib or something? I feel the need to say it works fine on Windows 7. Thanks for looking.
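    One workaround from that era, sketched here with hedges: MTP support in 13.04's gvfs was limited, mtpfs is flaky, go-mtpfs was the common alternative, and -o allow_other also requires user_allow_other to be enabled in /etc/fuse.conf:

        sudo apt-get install mtpfs
        mkdir -p ~/nexus7
        mtpfs -o allow_other ~/nexus7      # mount the tablet over MTP with write access
        # ... copy files ...
        fusermount -u ~/nexus7             # unmount when done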

    Read the article

  • How to install gnome shell extensions offline?

    - by nosklo
    I know how to go to the https://extensions.gnome.org/ website and download gnome-shell extensions, but now I need to install some extensions available there on a computer without any internet access at all. It is on an internal corporate network and there's no way I can get outside internet access on it, so I must find another way; I can copy files on a USB disk. On my home computer, I have found my extensions at ~/.local/share/gnome-shell/extensions/, but just copying this folder to the target corporate computer didn't do the trick. Running gnome-tweak-tool gives me an "Install Shell Extension" button, but I don't know how to download an extension in a format acceptable to that button; I have tried to point it to the folder above, but that didn't work either. What do I need to do?
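    A hedged sketch of a manual install from a zip fetched on another machine: the folder name must match the "uuid" field in the extension's metadata.json, or GNOME Shell will ignore it (the grep/cut assumes metadata.json keeps "uuid" on its own line):

        unzip extension.zip -d /tmp/ext                       # zip from extensions.gnome.org
        uuid=$(grep -o '"uuid"[^,]*' /tmp/ext/metadata.json | cut -d'"' -f4)
        mkdir -p ~/.local/share/gnome-shell/extensions/"$uuid"
        cp -r /tmp/ext/. ~/.local/share/gnome-shell/extensions/"$uuid"/
        # Restart the shell (Alt+F2, then 'r') and enable the extension:
        gnome-shell-extension-tool -e "$uuid"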

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the operating system is being utilized: you can research historical usage as well as real-time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3 Download the presentation here. See also the blog about Alert Monitoring and Problem Notification and the blog about Using Operational Profiles to Install Packages and other content.

    Here is a quick summary of what you can do with OS Analytics in Ops Center: view historical charts and real-time values of CPU, memory, network and disk utilization; find the top CPU and memory processes, in real time or on a given historical day; determine proper monitoring thresholds based on historical data; and drill down into a process's details.

    Where to start: to start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real-time top 5 processes in each category (click the image to see a larger version). In that screen, you can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for the process, along with some port bindings and open file descriptors. Next, click the "Processes" tab to see real-time information on all the processes on the machine. An interesting column is the "Target" column: if you configured Ops Center to work with Enterprise Manager Cloud Control, the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table; if you are only using Ops Center, this column will remain empty.

    The "Threshold" tab is particularly helpful: you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values, or you can set your own. The different colors in the graph represent the currently set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they are positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages; so when looking at values such as CPU usage, try shorter, more detailed time frames, such as one hour or one day.

    Applying new monitoring values: when first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want either to have custom monitoring for a specific machine, or to use the current machine as a "gold image" and extract a Monitoring Policy from it; you can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

    Visiting the past: under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?) but you were not around to look at it in real time. Here's a view into yesterday's data on one of the machines: you can see an interesting CPU spike happening at around 3:30 am, along with some memory use, and in the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here: you can also view historical data to determine which of the zones was the busiest at a given time.

    Under the hood: the data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/. An "os.zip" file exists for the main OS; inside you will find many small text files, named after the Epoch timestamp at which they were taken. If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of each zone containing its own "os.zip". If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller. The actual script collecting the data can be viewed for debugging purposes as well; on Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect. If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:

        # touch /tmp/.collect.stderr

    The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per agent in /opt/sun/n1gc/lib/XVM.properties; find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner
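    A hedged peek at the raw Analytics data on a Linux agent, using only the paths the post gives:

        ls /var/opt/sun/xvm/analytics/historical/
        unzip -l /var/opt/sun/xvm/analytics/historical/os.zip | head
        # Capture the collector's stderr for debugging, as described above:
        touch /tmp/.collect.stderr
        tail -f /tmp/.collect.stderr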

    Read the article

  • Sysadmin Nightmares – Server Room Disasters [Videos]

    - by Asian Angel
    There you are, looking at a pristine server room when disaster suddenly strikes! Whether it is fire, floods, or other causes, you will feel sympathy for the sysadmins involved when watching this collection of seven server room disasters that Wired has put together. You can view the other six videos in the collection by visiting the Wired post linked below… Server Snuff: 7 Videos of a Sysadmin’s Worst Nightmares [via Fail Desk]

    Read the article

  • Playing Blu-ray using VLC

    - by Kevin
    I've been searching for a way to play Blu-ray discs using VLC on Ubuntu 12.04 64-bit, and everything tells me that I have to somehow rip the disc into .mkv format and then play that. But today I found a page with directions on how to play Blu-rays directly in VLC. I've already finished downloading and copying the files it told me to, but it didn't work for me. Then I found a page where someone had tried it and it worked for them. Does anyone know how they got it to work? The page with the directions. The page where it worked. I also found this. Does anyone know what it is talking about? I've already downloaded lxBDplayer, but I encountered a problem when installing the lcbdaacs plugins. Are there any other ways to install the lxbdaacs plugin?
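    For context, the guides of that era generally boil down to the following hedged sketch; libaacs itself was not in the Ubuntu archives, and the KEYDB.cfg key database has to come from a third party, so none of this is guaranteed to work for a given disc:

        sudo apt-get install libbluray1          # Blu-ray disc support for VLC
        mkdir -p ~/.config/aacs
        cp KEYDB.cfg ~/.config/aacs/             # AACS key database, obtained separately
        vlc bluray:///dev/sr0                    # point VLC at the disc (device path may vary)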

    Read the article

  • I need to develop a parser. Can I use Lex and Yacc for the purpose?

    - by Scrooge
    I need to extract very particular data from log files (of different types and formats). Since I am a recent college passout, my mind ran to using Lex and Yacc for the purpose. Now I have the following questions:

    1. Will it be legal to do so? (The product I am working on belongs to one of the biggest tech companies in the world.)
    2. Also, I would like to know if I am being too afraid to write my own parser.
    3. How can I use Lex and Yacc if my product is Windows-based?

    Please tell me if you need any clarification or extra information.
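    A minimal hedged build sketch with GNU flex/bison (both run on Windows via Cygwin or MinGW, which speaks to question 3; the spec and grammar file names are hypothetical):

        flex log_rules.l            # hypothetical lexer spec  -> lex.yy.c
        bison -d log_grammar.y      # hypothetical grammar     -> log_grammar.tab.c/.h
        gcc lex.yy.c log_grammar.tab.c -o logparser -lfl
        ./logparser < server.log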

    Read the article

  • Low level Linux graphics

    - by math4tots
    For educational purposes, I'd like to write an application in a Linux environment that can process keyboard events and draw graphics without huge dependencies like X or SDL. I presume that this must be possible, because X and SDL are just programs themselves, so they must rely on other methods inherent to the environment. Is this understanding correct? If so, where might I learn to write such a program? My limited experience tells me that it would involve making calls to the kernel and/or writing to special files; however, I haven't been able to find any tutorials on the matter (I am not even sure what to Google). Also, in case it is relevant, I am running Debian Squeeze on VirtualBox. I used a netinst CD without networking, so there isn't much installed on it currently. I will install gcc, but I am hoping I can get by with nothing more.
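    For a hedged first taste of the kernel interfaces that X and SDL build on, run the following from a raw virtual console (Ctrl+Alt+F1), not inside X; the event device number varies per machine:

        sudo sh -c 'cat /dev/urandom > /dev/fb0'    # paint noise straight onto the framebuffer
        ls /dev/input/                              # keyboard/mouse event devices live here
        sudo head -c 64 /dev/input/event0 | xxd     # dump a few raw input events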

    Read the article

  • Database .NET

    - by Guilherme Cardoso
    Database .NET is an awesome tool that allows us to manage several databases simultaneously (SQL Server, MySQL, Oracle, etc.). What led me to install this tool was a problem that must be shared by most developers: tools like SQL Server Management Studio consume too many resources, and if you don't have a decent computer you'll run into performance problems, and that's my case! With Database .NET we can access an SQL Server instance, for example, and perform various actions on the database (CRUD operations, managing procedures, etc.). Of course, this tool can't fully replace SQL Server Management Studio, but it's really useful if you just need to perform small operations, because it consumes far fewer resources. This tool doesn't need to be installed (it can be used as a portable application). One tip: if you are using SQL Server Express, don't forget to check the server name in the Database .NET connection. In my case I had to change it from GUILHERM-196634 to GUILHERM-196634\SQLExpress. Project: http://fishcodelib.com/Database.htm Download: http://fishcodelib.com/files/DatabaseNet3.zip

    Read the article
