Search Results

Search found 19690 results on 788 pages for 'result partitioning'.


  • Using JuJu with private Openstack cloud deployment?

    - by user76054
    I'm seeing a number of problems trying to use JuJu with our internally deployed OpenStack cloud. Most of them appear to be centered around DNS host resolution, as well as the need to deal with our company's internal HTTP proxies.

    Our OpenStack deployment relies upon an unroutable 172.16.0.0/12 block of addresses for VLAN allocation to each project (tenant) hosted on our internal cloud. Users have the option of assigning one or more floating addresses to instances, allocated from a block of routable addresses on our company's internal LAN. Currently, OpenStack doesn't register instance names with anything other than the dnsmasq service running on the cloud controller. As such, there's no way to resolve an instance's address through our internal DNS hierarchy (this issue has already been reported as Bug #945505). So even though I can bootstrap my JuJu server node, I can't connect to it with the JuJu client, since the client can't resolve the local (private) network name. I am able to ssh to the node once I've assigned it an internally routable (i.e. floating) address. Which leads to the next issue.

    To install software on an instance running in our cloud, the instance must have our internal proxy address defined - either in the apt.conf file or via environment variables. Unfortunately, when bootstrapping the server node, there's no provision to pass this info into an instance via the JuJu environment.yaml file (if this is even the best way to handle this issue). As a result, the bootstrap node is unable to install the required packages.

    I'm assuming (dangerous, I know) that the way I've deployed OpenStack in our internal environment is probably not unique. Has anyone else encountered these issues? And more importantly, are workarounds available? Regards, Ross
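
    For the proxy half of the problem, a minimal sketch of what the bootstrap node ultimately needs on disk; the proxy host and port are placeholders, and writing the file by hand assumes you can already ssh in via a floating address:

        # point apt at the internal proxy (hostname/port are examples, not real values)
        cat > /etc/apt/apt.conf.d/95proxies <<'EOF'
        Acquire::http::Proxy "http://proxy.internal.example.com:3128";
        Acquire::https::Proxy "http://proxy.internal.example.com:3128";
        EOF
        # verify apt can now reach the archive through the proxy
        apt-get update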

    Read the article

  • Map /dev/bus/usb node to /sys node on Linux

    - by Cody Brocious
    I'm using libusb to find and access a USB device, but once I get the information I need from there, I need to map it to a /sys node. This could be the actual USB bus it's on, the /sys/bus/usb-serial node (which is where I'm going to end up eventually), or effectively anywhere else, since I can walk the tree from there. I can get to a /dev/bus/usb node easily enough, but I'm a bit lost from there. What would be the best route to perform this mapping? Alternatively, a way to get the /dev/ttyUSB device node for a /dev/bus/usb node would work as well, since it gets me the same result.
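
    One possible route, sketched in shell to show the idea: the bus/device numbers that make up a /dev/bus/usb/BBB/DDD path correspond to the busnum and devnum attribute files in sysfs, so you can match on those (in C the same walk would be opendir/fopen over /sys/bus/usb/devices):

        # BUS/DEV are the numbers from /dev/bus/usb/002/003, i.e. bus 2, device 3
        BUS=2; DEV=3
        for d in /sys/bus/usb/devices/*; do
            # interfaces like 2-1:1.0 lack these attribute files; skip them
            [ -f "$d/busnum" ] && [ -f "$d/devnum" ] || continue
            if [ "$(cat $d/busnum)" = "$BUS" ] && [ "$(cat $d/devnum)" = "$DEV" ]; then
                echo "$d"   # walk down from here to find e.g. */ttyUSB* children
            fi
        done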

    Read the article

  • Sending mail via php in EC2

    - by william007
    I have used the following code to send mail with PHP on Amazon EC2, but I only see 'aatest' as the result and never get any incoming email. By the way, I have already included ses.php, have verified the email address [email protected], and have double-checked that the access key and secret are correct. Can anyone suggest a way to debug this?

        require_once('ses.php');
        $con = new SimpleEmailService('accesskey', 'accesskey');
        print_r('aa' . $con->listVerifiedEmailAddresses());
        $m = new SimpleEmailServiceMessage();
        $m->addTo('[email protected]');
        $m->setFrom('[email protected]');
        $m->setSubject('Hello, world!');
        $m->setMessageFromString('This is the message body.');
        print_r($con->sendEmail($m));
        echo 'test';

    Read the article

  • Error: You need to load the kernel first in Grub

    - by Exeleration-G
    I have Lubuntu 11.10 installed on /dev/sda3 and Xubuntu 11.10 on /dev/sda5. A while ago, while on Lubuntu, I made a mistake creating a Live USB: by accident, I installed the Live USB bootloader onto /dev/sda3. This didn't cause any problem at the time. Today, I updated the kernel and had to restart Lubuntu. In Grub, Lubuntu suddenly didn't appear anymore, and I booted automatically into Xubuntu. I ran update-grub and tried grub-customizer to get Lubuntu back into Grub, but that didn't work. I ran os-prober, but it doesn't show Lubuntu. Then I tried to add a new entry called 12_lubuntu to /etc/grub.d/ on /dev/sda5. It contained the following:

        #!/bin/sh -e
        echo "Lubuntu"
        cat << EOF
        menuentry "Lubuntu" {
            set root=(hd0,3)
            linux /boot/vmlinuz
            initrd /boot/initrd.img
        }
        EOF

    After doing that, I ran update-grub, and with grub-customizer I wrote the Grub configuration to the MBR, that is, /dev/sda. Suddenly, Lubuntu appeared in Grub. I tried to launch it, but the following message appeared:

        Error: You need to load the kernel first

    How can I make Grub start Lubuntu again?
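
    One thing worth checking before editing the entry further is whether the paths the menuentry names actually exist on /dev/sda3. A hedged troubleshooting sketch (the mount point is arbitrary; on many Ubuntu installs the unversioned vmlinuz/initrd.img symlinks live in / rather than /boot, and only versioned files like vmlinuz-3.0.0-12-generic exist under /boot):

        sudo mkdir -p /mnt/lubuntu
        sudo mount /dev/sda3 /mnt/lubuntu
        # the menuentry references /boot/vmlinuz and /boot/initrd.img on (hd0,3)
        ls -l /mnt/lubuntu/boot/vmlinuz* /mnt/lubuntu/boot/initrd.img* \
              /mnt/lubuntu/vmlinuz /mnt/lubuntu/initrd.img
        sudo umount /mnt/lubuntu

    If only versioned files show up, pointing the linux and initrd lines at those exact file names is the likely fix.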

    Read the article

  • Problem with Nobody and PHP

    - by user39190
    We use cPanel/WHM and have just used EasyApache to upgrade to Apache 2.2 and PHP 5.3.2. As a result of the upgrade process, the user "nobody", which runs PHP, can no longer access directories or create directories in the various owners' web directories. For instance, with WordPress we can no longer upload images. We have tried rebuilding with suEXEC on and off and suPHP on and off, and nothing makes any difference. It feels as if the user "nobody" has got its permissions messed up, but that is a wild guess. How can we get back to where we started, which is, I assume, that "nobody" had rights to the file systems in the web directories?
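
    A hedged first diagnostic step, assuming shell access (the account name and paths are examples; cPanel layouts vary): confirm which user PHP actually runs as after the rebuild, then test that user's access to an upload directory:

        # drop a one-liner in a docroot to report the PHP user; remove it afterwards
        echo '<?php echo exec("id"); ?>' > /home/example/public_html/whoami.php
        # browse to /whoami.php and note the uid/gid reported, then, as root,
        # try the same access PHP needs as that user:
        su -s /bin/bash nobody -c 'ls -ld /home/example/public_html/wp-content/uploads'

    If the id reported is the account owner rather than nobody, suPHP/suEXEC took effect and the fix is ownership (chown to the account), not permissions for nobody.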

    Read the article

  • which NoSQL for billions of records [closed]

    - by airtruk
    There are plenty of discussions around NoSQL databases, and a lot of them are about data logging in the social media space. The problem I'm trying to solve falls more into the scientific computing category: I have several thousands of billions of pieces of information that I want to query with different criteria for each query. All the data lives in an at least four-dimensional space, meaning I have a 3D location (x, y, z) and a time component - plus the value and unit. Say, temperature at (x, y, z) at 10 min, in degrees Celsius. A typical query result may contain several million results. I have read about pretty much all NoSQL solutions being exceptionally fast at inserting records, but when it comes to querying them it's a different story. I'm leaning towards MongoDB for the implementation and as the platform for developing the necessary code, since it is more closely related to the current solution using MySQL. Happy to be proven wrong, though, when it comes to the choice of the NoSQL solution.

    Read the article

  • Printing problem in Silverlight 4.0 RC - loading images in code behind

    - by Jacek Ciereszko
    A few days ago I faced a problem with printing in the new Silverlight 4 RC. When you dynamically load an image (in code-behind) and print it, it doesn't work: the printed sheet is blank.

    Problem

    XAML file:

        <Image x:Name="image" Stretch="None" />

    XAML.cs:

        image.Source = new BitmapImage(new Uri(imageUri, UriKind.RelativeOrAbsolute));

    Print:

        var pd = new PrintDocument();
        pd.PrintPage += (s, args) =>
        {
            args.PageVisual = image;
        };
        pd.Print();

    Result: blank paper.

    Solution

    What you need to do is force the Silverlight engine to load the image before printing starts. To accomplish that, I propose simply reading the PixelWidth value: because your code asks for the image's PixelWidth, the image has to be loaded first.

    XAML.cs:

        BitmapImage bImage = new BitmapImage(new Uri(imageUri, UriKind.RelativeOrAbsolute));
        image.Source = bImage;
        InitControl(imageUri, movieUri, isLeft);
        int w = bImage.PixelWidth;
        int h = bImage.PixelHeight;

    Done!

    Jacek Ciereszko

    Read the article

  • IIS7, different ports for websites but no portnumber in the browser

    - by Queensheep
    I have a Windows Server 2008 machine running IIS7 with 4 websites. In DNS I have 4 different URLs which point to the IP of the server. I configured each website with these site bindings:

        website1: hostname: url1, port: 80, IP address: the address of the server
        website2: hostname: url2, port: 80, IP address: the address of the server

    The result is that, from the client, I can browse with all 4 URLs to the corresponding websites and everything is fine. Then I changed the port of the websites in IIS, so that website1 now uses port 8080, website2 uses port 8081, and so on. Now I have to put the port number in the browser along with the URL (like URL:8080). Is there a way to configure the websites with different port numbers but not have to use the port numbers in the browser?

    Read the article

  • Resolution stuck after playing OpenGL game

    - by kit.yang
    I used to start the game Frozen Throne (using Wine) with the "-opengl" option. When I entered the game the resolution changed, and it was restored after exiting the game. But this time a problem happened: the resolution can't be restored, although I have restarted my computer several times. Both the Ubuntu panel and the login window are affected. nvidia-settings also detects the resolution as "1024 x 768", but the tool seems useless here. (Screenshot: NVIDIA X Server Settings.)

    The result of xrandr:

        Screen 0: minimum 320 x 240, current 1024 x 768, maximum 1024 x 768
        default connected 1024x768+0+0 0mm x 0mm
           1024x768       50.0*
           800x600        51.0     52.0     53.0
           680x384        54.0     55.0
           640x480        56.0
           576x432        57.0
           512x384        58.0
           400x300        59.0     60.0     61.0
           320x240        62.0

    The contents of /etc/X11/xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 1.0 (buildd@yellow) Fri Apr 9 11:51:21 UTC 2010

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: builtin, VertRefresh source: builtin
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "CRT-0"
            HorizSync       28.0 - 55.0
            VertRefresh     43.0 - 72.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "Entry Graphics"
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "1024x768 +0+0"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "0"
            Option         "TwinViewXineramaInfoOrder" "CRT-0"
            Option         "metamodes" "1024x768_60 +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection
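
    If the goal is simply to get the higher resolutions back, one hedged approach - assuming the pinned "metamodes" line and the narrow HorizSync/VertRefresh ranges in the config above are what is capping the maximum at 1024x768 - is to back up the config and let the NVIDIA tooling regenerate it:

        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak   # keep a copy in case of regressions
        sudo nvidia-xconfig                                  # regenerate a fresh xorg.conf
        # then restart X (log out and back in, or reboot) and re-check the mode list:
        xrandr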

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scanning every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs - or, more accurately, in order to choose when to have the NULLs - you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME
        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()
        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set reveals some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. These are pretty self-explanatory, and we can wrap those in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which gives us

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i
            ON [ddips].[index_id] = [i].[index_id]
            AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the DMV are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

        avg_fragmentation_in_percent - the amount that the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
        fragment_count - the number of pieces that the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
        avg_fragment_size_in_pages - the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
        page_count - total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would have a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and performance starts to drop. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren, which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track then. Querying the DMV with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked: column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index, and column 4 is the average size of each record. This approximates to: ((Col1*8)*1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important, as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index, purely offered as a guide to the DBA to better understand the storage practices taking place.

    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

        - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
        - the columns used in the index, which should be analysed so that new records don't need to be inserted in the middle of the index but rather are always added to the end.

    * - it's approximate, as there are many factors associated with things like the type of data and other database settings that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred. Run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Why class 'SQLiteDatabase' is not found in /var/www/*.php?

    - by Roman
    I am trying to use SQLite from PHP. I have the following simple code:

        <?php
        $db = new SQLiteDatabase("test2.sdb");
        unset($db);
        ?>

    As the result of this code (which I execute on the command line with "php test2.php") I get:

        Fatal error: Class 'SQLiteDatabase' not found in /var/www/test2.php on line 3

    Does anybody know how I can make PHP able to use SQLite?

    P.S. Here I found out that "SQLite support is enabled by default on a standard Linux PHP compilation starting with PHP 5.0." And I have "PHP Version = 5.2.6-2ubuntu4.6", so SQLite should be enabled unless PHP was built with "--disable-sqlite". In my case the output of "phpinfo();" does not contain "sqlite" at all.

    Read the article

  • Can I use two of the same type of PCI Sound Cards in one computer?

    - by Eamon
    I recently purchased two Rocketfish 5.1 PCI Sound Cards from Best Buy. These are going to be used for audio production and radio broadcasting, so one card can handle live audio, the other can handle cue audio. After installing both cards, I get strange noises from the broadcasting program, and then the computer locks up solid. I have to hard restart to get it back up. This only happens when the two audio cards are installed on the computer. I have tried switching them around to different PCI slots, with the same result.

    Read the article

  • Getting Started with TypeScript – Classes, Static Types and Interfaces

    - by dwahlin
    I had the opportunity to speak on different JavaScript topics at DevConnections in Las Vegas this fall and heard a lot of interesting comments about JavaScript as I talked with people. The most frequent comment I heard from people was, “I guess it’s time to start learning JavaScript”. Yep – if you don’t already know JavaScript then it’s time to learn it. As HTML5 becomes more and more popular the amount of JavaScript code written will definitely increase. After all, many of the HTML5 features available in browsers have little to do with “tags” and more to do with JavaScript (web workers, web sockets, canvas, local storage, etc.). As the amount of JavaScript code being used in applications increases, it’s more important than ever to structure the code in a way that’s maintainable and easy to debug. While JavaScript patterns can certainly be used (check out my previous posts on the subject or my course on Pluralsight.com), several alternatives have come onto the scene such as CoffeeScript, Dart and TypeScript. In this post I’ll describe some of the features TypeScript offers and the benefits that they can potentially offer enterprise-scale JavaScript applications. It’s important to note that while TypeScript has several great features, it’s definitely not for everyone or every project especially given how new it is. The goal of this post isn’t to convince you to use TypeScript instead of standard JavaScript….I’m a big fan of JavaScript. Instead, I’ll present several TypeScript features and let you make the decision as to whether TypeScript is a good fit for your applications.

    TypeScript Overview

    Here’s the official definition of TypeScript from the http://typescriptlang.org site: “TypeScript is a language for application-scale JavaScript development. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source.” TypeScript was created by Anders Hejlsberg (the creator of the C# language) and his team at Microsoft. To sum it up, TypeScript is a new language that can be compiled to JavaScript much like alternatives such as CoffeeScript or Dart. It isn’t a stand-alone language that’s completely separate from JavaScript’s roots though. It’s a superset of JavaScript which means that standard JavaScript code can be placed in a TypeScript file (a file with a .ts extension) and used directly. That’s a very important point/feature of the language since it means you can use existing code and frameworks with TypeScript without having to do major code conversions to make it all work. Once a TypeScript file is saved it can be compiled to JavaScript using TypeScript’s tsc.exe compiler tool or by using a variety of editors/tools. TypeScript offers several key features. First, it provides built-in type support meaning that you define variables and function parameters as being “string”, “number”, “bool”, and more to avoid incorrect types being assigned to variables or passed to functions. Second, TypeScript provides a way to write modular code by directly supporting class and module definitions, and it even provides support for custom interfaces that can be used to drive consistency. Finally, TypeScript integrates with several different tools such as Visual Studio, Sublime Text, Emacs, and Vi to provide syntax highlighting, code help, build support, and more depending on the editor. Find out more about editor support at http://www.typescriptlang.org/#Download.
    TypeScript can also be used with existing JavaScript frameworks such as Node.js, jQuery, and others, and can even catch type issues and provide enhanced code help. Special “declaration” files that have a d.ts extension are available for Node.js, jQuery, and other libraries out-of-the-box. Visit http://typescript.codeplex.com/SourceControl/changeset/view/fe3bc0bfce1f#samples%2fjquery%2fjquery.d.ts for an example of a jQuery TypeScript declaration file that can be used with tools such as Visual Studio 2012 to provide additional code help and ensure that a string isn’t passed to a parameter that expects a number. Although declaration files certainly aren’t required, TypeScript’s support for declaration files makes it easier to catch issues upfront while working with existing libraries such as jQuery. In the future I expect TypeScript declaration files will be released for different HTML5 APIs such as canvas, local storage, and others, as well as some of the more popular JavaScript libraries and frameworks.

    Getting Started with TypeScript

    To get started learning TypeScript visit the TypeScript Playground available at http://www.typescriptlang.org. Using the playground editor you can experiment with TypeScript code, get code help as you type, and see the JavaScript that TypeScript generates once it’s compiled. Here’s an example of the TypeScript playground in action. (Screenshot: the TypeScript Playground.)

    One of the first things that may stand out to you about the code shown above is that classes can be defined in TypeScript. This makes it easy to group related variables and functions into a container, which helps tremendously with re-use and maintainability, especially in enterprise-scale JavaScript applications. While you can certainly simulate classes using JavaScript patterns (note that ECMAScript 6 will support classes directly), TypeScript makes it quite easy, especially if you come from an object-oriented programming background. An example of the Greeter class shown in the TypeScript Playground is shown next:

        class Greeter {
            greeting: string;
            constructor (message: string) {
                this.greeting = message;
            }
            greet() {
                return "Hello, " + this.greeting;
            }
        }

    Looking through the code you’ll notice that static types can be defined on variables and parameters such as greeting: string, that constructors can be defined, and that functions can be defined such as greet(). The ability to define static types is a key feature of TypeScript (and where its name comes from) that can help identify bugs upfront, before even running the code. Many types are supported including primitive types like string, number, bool, undefined, and null as well as object literals and more complex types such as HTMLInputElement (for an <input> tag). Custom types can be defined as well. The JavaScript output by compiling the TypeScript Greeter class (using an editor like Visual Studio, Sublime Text, or the tsc.exe compiler) is shown next:

        var Greeter = (function () {
            function Greeter(message) {
                this.greeting = message;
            }
            Greeter.prototype.greet = function () {
                return "Hello, " + this.greeting;
            };
            return Greeter;
        })();

    Notice that the code is using JavaScript prototyping and closures to simulate a Greeter class in JavaScript. The body of the code is wrapped with a self-invoking function to take the variables and functions out of the global JavaScript scope. This is an important feature that helps avoid naming collisions between variables and functions.
    In cases where you’d like to wrap a class in a naming container (similar to a namespace in C# or a package in Java) you can use TypeScript’s module keyword. The following code shows an example of wrapping an AcmeCorp module around the Greeter class. In order to create a new instance of Greeter, the module name must now be used. This can help avoid naming collisions that may occur with the Greeter class.

        module AcmeCorp {
            export class Greeter {
                greeting: string;
                constructor (message: string) {
                    this.greeting = message;
                }
                greet() {
                    return "Hello, " + this.greeting;
                }
            }
        }

        var greeter = new AcmeCorp.Greeter("world");

    In addition to being able to define custom classes and modules in TypeScript, you can also take advantage of inheritance by using TypeScript’s extends keyword. The following code shows an example of using inheritance to define two report objects:

        class Report {
            name: string;
            constructor (name: string) {
                this.name = name;
            }
            print() {
                alert("Report: " + this.name);
            }
        }

        class FinanceReport extends Report {
            constructor (name: string) {
                super(name);
            }
            print() {
                alert("Finance Report: " + this.name);
            }
            getLineItems() {
                alert("5 line items");
            }
        }

        var report = new FinanceReport("Month's Sales");
        report.print();
        report.getLineItems();

    In this example a base Report class is defined that has a variable (name), a constructor that accepts a name parameter of type string, and a function named print(). The FinanceReport class inherits from Report by using TypeScript’s extends keyword. As a result, it automatically has access to the print() function in the base class. In this example the FinanceReport overrides the base class’s print() method and adds its own. The FinanceReport class also forwards the name value it receives in the constructor to the base class using the super() call. TypeScript also supports the creation of custom interfaces when you need to provide consistency across a set of objects. The following code shows an example of an interface named Thing (from the TypeScript samples) and a class named Plane that implements the interface to drive consistency across the app. Notice that the Plane class includes intersect and normal as a result of implementing the interface.

        interface Thing {
            intersect: (ray: Ray) => Intersection;
            normal: (pos: Vector) => Vector;
            surface: Surface;
        }

        class Plane implements Thing {
            normal: (pos: Vector) => Vector;
            intersect: (ray: Ray) => Intersection;
            constructor (norm: Vector, offset: number, public surface: Surface) {
                this.normal = function (pos: Vector) { return norm; }
                this.intersect = function (ray: Ray): Intersection {
                    var denom = Vector.dot(norm, ray.dir);
                    if (denom > 0) {
                        return null;
                    } else {
                        var dist = (Vector.dot(norm, ray.start) + offset) / (-denom);
                        return { thing: this, ray: ray, dist: dist };
                    }
                }
            }
        }

    At first glance it doesn’t appear that the surface member is implemented in Plane, but it’s actually included automatically due to the public surface: Surface parameter in the constructor. Adding public varName: Type to a constructor automatically adds a typed variable into the class without having to explicitly write the code as with normal and intersect. TypeScript has additional language features, but defining static types and creating classes, modules, and interfaces are some of the key features it offers. So is TypeScript right for you and your applications? That’s not a question that I or anyone else can answer for you. You’ll need to give it a spin to see what you think.
    In future posts I’ll discuss additional details about TypeScript and how it can be used with enterprise-scale JavaScript applications. In the meantime, I’m in the process of working with John Papa on a new TypeScript course for Pluralsight that we hope to have out in December of 2012.

    Read the article

  • Strange Misleading Error [XML-20108/AC-10006] when doing the R12 Cloning

    - by [email protected]
    During a recent multi-node to single-node R12 clone, I encountered a strange error while doing the database portion of the clone. The command below, 'adclonectx.pl', creates the context file:

        perl adclonectx.pl contextfile=$ORACLE_HOME/appsutil/SOURCE_CONTEXT_FILE.xml template=$ORACLE_HOME/appsutil/template/adxdbctx.tmp pairsfile=$ORACLE_HOME/appsutil/clone/pairsfile.txt initialnode

    When running this command, it dumped the error below:

        file:/tmp/tmpCtxClone.xml<Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected.
        AC-10006: Exception - org.xml.sax.SAXParseException: file:/tmp/tmpCtxClone.xml<Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected.
        thrown while creating OAVars object for file: /tmp/tmpCtxClone.xml
        The new database context file has been created:
        /opt/oracle/product/11.1.0_IOFT/appsutil/IOFT_frws35ta.xml

    At first sight, I suspected that the issue was with the format of the source XML file, so I compared it with a working XML file. The result was clean. The following portion of the error struck me:

        thrown while creating OAVars object for file: /tmp//dummy.xml

    Cause: /tmp is 100% full.

    Fix: either remove the old files in the /tmp directory, OR export TEMP=/new/location, where there is plenty of free space.
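
    A quick way to confirm the diagnosis before re-running adclonectx.pl (the alternative TEMP path below is only an example):

        df -h /tmp                  # confirm utilisation; the cause here was 100% full
        export TEMP=/u01/stage/tmp  # any filesystem with free space (path is an example)
        df -h $TEMP                 # sanity-check the new location before retrying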

    Read the article

  • Replace an IP address with its whois using bash

    - by user2099762
    I have a traffic log similar to this:

        "page visited" for xxx.xxx.xxx.xxx at 2013-10-30

    and I would like to replace the IP address with the result of its whois lookup. I can export the IP addresses to a separate file and then do a whois on each line, but I'm struggling to combine them all together. Ideally I'd like to replace the IP address in the same string and print the new string to a new file, so it would look like:

        "page visited" for example.com at 2013-10-30

    Can anyone help? Here's what I have so far:

        grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' clean_cites.txt > iplist.txt
        for i in `cat iplist.txt`
        do
            OUTPUT=$(geoiplookup -f /usr/share/GeoIP/GeoIPOrg.dat $i)
            echo $i,$OUTPUT >> visited.txt
        done

    Like I said, this produces a separate file with a list of IP addresses and their relevant hostnames, so I either need to search for each IP address in file A and replace it with the text from file B (which gives the IP address and hostname), or replace the IP address in place. Thanks
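
    A hedged sketch of the combining step. It uses reverse DNS (dig -x) rather than full whois output, since the desired result above is a bare hostname like example.com; file names follow the question:

        cp clean_cites.txt resolved.txt
        for ip in $(grep -oE '[0-9]{1,3}(\.[0-9]{1,3}){3}' clean_cites.txt | sort -u)
        do
            name=$(dig +short -x "$ip" | head -n 1)   # reverse DNS lookup
            name=${name%.}                            # strip the trailing dot dig appends
            # replace in place when a name was found (dots in $ip act as regex
            # wildcards here, which is harmless for this purpose)
            [ -n "$name" ] && sed -i "s/$ip/$name/g" resolved.txt
        done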

    Read the article

  • Enabling XML-documentation for code contracts

    - by DigiMortal
    One nice feature that code contracts offer is updating of code documentation. If you are using the source code documenting features of Visual Studio, then code contracts may automate some tasks you would otherwise have to implement manually. In this posting I will show you some XML documentation files with documented contracts. I will also explain how this feature works.

    Enabling XML-documentation in project settings

    As a first thing, let's enable generating of code documentation under project settings. Open the project properties, move to the Build page and check the checkbox called "XML documentation file". Save the project settings and rebuild the project. When the project is built, go to the bin/Debug folder and open the XML file. Here is my XML:

        <?xml version="1.0"?>
        <doc>
            <assembly>
                <name>Eneta.Examples.CodeContracts.Testable</name>
            </assembly>
            <members>
                <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer">
                    <summary>
                    Class for generating random integers in user specified range.
                    </summary>
                </member>
                <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)">
                    <summary>
                    Constructor of Randomizer. Initializes Randomizer class.
                    </summary>
                    <param name="generator">Instance of random number generator.</param>
                </member>
                <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)">
                    <summary>
                    Returns random integer in given range.
                    </summary>
                    <param name="min">Minimum value of random integer.</param>
                    <param name="max">Maximum value of random integer.</param>
                </member>
            </members>
        </doc>

    You can see nothing about code contracts here.

    Enabling code contracts documentation

    Code contracts have their own settings and conditions for documentation. Open the project properties and move to the Code Contracts tab. From the "Contract Reference Assembly" dropdown select Build and check the checkbox "Emit contracts into XML doc file". And again: save the project settings, build the project and move to the bin/Debug folder. Now you can see that there are two files for XML-documentation:

        <assembly name>.XML
        <assembly name>.old.XML

    The first file is the documentation with contracts; the second file is the original documentation without contracts. Let's see now what is inside our new XML-documentation file.

        <?xml version="1.0"?>
        <doc>
          <assembly>
            <name>Eneta.Examples.CodeContracts.Testable</name>
          </assembly>
          <members>
            <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer">
              <summary>
                Class for generating random integers in user specified range.
              </summary>
            </member>
            <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)">
              <summary>
                Constructor of Randomizer. Initializes Randomizer class.
              </summary>
              <param name="generator">Instance of random number generator.</param>
            </member>
            <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)">
              <summary>
                Returns random integer in given range.
              </summary>
              <param name="min">Minimum value of random integer.</param>
              <param name="max">Maximum value of random integer.</param>
              <requires description="Min must be less than max" exception="T:System.ArgumentOutOfRangeException">
                min &lt; max</requires>
              <exception cref="T:System.ArgumentOutOfRangeException">
                min &gt;= max</exception>
              <ensures description="Return value is out of range">
                Contract.Result&lt;int&gt;() &gt;= min &amp;&amp;
                Contract.Result&lt;int&gt;() &lt;= max</ensures>
            </member>
          </members>
        </doc>

    As you can see, the code contracts are pretty well documented. The messages that I provided with code contracts are also available in the documentation. If I wrote very good and informative messages, then these messages are also very useful in the contracts documentation.

    Code contracts and Sandcastle

    Sandcastle knows nothing about code contracts by default. There is a separate package of files for Sandcastle that is provided by the code contracts installation. You can read from the code contracts manual:

    "Sandcastle (http://www.codeplex.com/Sandcastle) is a freely available tool that generates help files and web sites describing your APIs, based on the XML doc comments in your source code. The CodeContracts install contains a set of files that can be copied over a Sandcastle installation to take advantage of the additional contract information. The produced documentation adds a contract section to methods with declared requires and/or ensures. In order for Sandcastle to produce Contract sections, you need to patch a number of files in its installation. Please refer to the Sandcastle Readme.txt found under Start Menu/CodeContracts/Sandcastle for instructions. A future release of Sandcastle will hopefully support contract sections without the need for this patching step."

    Integrating code contracts documentation into Sandcastle will be one of my next postings about code contracts.

    Conclusion

    If you are documenting your source code, then documentation about code contracts can be added very easily. All you have to do is enable XML documentation for contracts and build your project. Later you can use the Sandcastle files provided by the code contracts installer to integrate contracts documentation into your output documentation package.

    Read the article

  • Input handling between game loops

    - by user48023
    This may be obvious and trivial for you, but as I am a newbie in programming I come with a specific question. I have three loops in my game engine: an input loop, an update loop and a render loop. The update loop is set to 10 ticks per second with a fixed timestep, the render loop is capped at around 60 fps, and the input loop runs as fast as possible. I am using one of the JavaScript frameworks which provide such things, but it doesn't really matter. Let's say I am rendering a tile map, and which elements are in view depends on camera-like movement variables which are modified while keys are pressed. This is only about the camera/viewport and rendering; no game physics is involved here. And now, how can I handle input events among these loops to keep the engine's reaction consistent? Am I supposed to read the current variable modified by input, do the needed calculations in the update loop and share the result so it can be interpolated in the render loop? Or read the input's effect directly inside the render loop and put the needed calculations there? I thought interpreting user input inside an update loop with a low tick rate would be inaccurate and feel unresponsive, even while rendering with interpolation in the final view. How is it done properly in games overall?

    Read the article

  • Acer Aspire 3680 Wireless Signal Problem

    - by SHiNKiROU
    I am using an Acer Aspire 3680. I recently reinstalled Windows Vista, and the wireless did not work by default. I installed the "Atheros Wireless LAN Driver" from this site: http://support.acer-euro.com/drivers/notebook/as_3680.html The result: the wireless APs are scanned successfully. However, when I tried to join a network, even with a 5/5 signal, an error message showed up saying the signal was too weak. I restarted into Ubuntu, and wireless worked perfectly, with the computer in the same position. Even more confusing: when I moved to the kitchen (2 meters from the router), the wireless connected; when I moved back to my room, wireless failed on Vista and worked on Ubuntu. It is clearly a driver problem. Acer laptops seem to come with pre-installed drivers, but after re-installation they are all gone. (I can't find the original Acer backup disc.) Please do not answer "move closer to the AP" or anything related to "interference", as wireless worked with Ubuntu's default driver.

    Read the article

  • Thank You MySQL Community! MySQL 5.6.9 Release Candidate Available Now!

    - by Rob Young
    The MySQL Community continues its good work in testing and refining MySQL 5.6, and as such the next iteration of the 5.6 Release Candidate is now available for download.  You can get MySQL 5.6.9 here (look under the "Development Releases" tab).  This version is the result of feedback we have gotten since MySQL 5.6.7 was announced at MySQL Connect in late September. As iron sharpens iron, Community feedback sharpens the quality and performance of MySQL so please download 5.6.9 and let us know how we can improve it as we move toward the production-ready product release in early 2013. MySQL 5.6 is designed to meet the agility demands of the next generation of web apps and services and includes across the board improvements to the Optimizer, InnoDB performance/scale and online DDL operations, self-healing Replication, Performance Schema Instrumentation, Security and developer enabling NoSQL functionality.  You can learn all the details and follow MySQL Engineering blogs on all of the key features in this MySQL DevZone article. On a related note, plan to join this week's live webinars to learn more about MySQL 5.6 Self-Healing Replication Clusters and Building the Next Generation of Web, Cloud, SaaS, Embedded Application and Services with MySQL 5.6.  Hurry!  Seating is limited!  As always, thanks for your continued support of MySQL!

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text in all opened SQL windows every N minutes, with the default being 30 minutes. This feature fixes the shortcoming of the Query Execution History, which is saved only when a query is run: if you're working on a large script and never execute it, the existing Query Execution History wouldn't save it. By contrast, the Window Content History saves everything in a .sql file, so you can even open it in your SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple searchable .sql files, there isn't a special search editor built in. It is turned ON by default, but despite the built-in optimizations for space minimization, be careful not to let it fill your disk. You can see how it looks in the pictures in the feature list.

    The fixed bugs are:

    SSMS 2008 R2 slowness reported by a few people. An object explorer context menu bug where it showed multiple SSMS Tools entries and showed wrong entries for a node. A datagrid bug in SQL snippets. Ability to read illegal XML characters from log files. Fixed the upper limit bug of saved history text to 5 MB. A bug where searching through result sets prevented search. A bug where text formatting errored out for certain scripts. A bug in finding servers where it would return null even though servers existed. Run custom scripts objects had a bug where |SchemaName| didn't display the correct table schema for columns. This is fixed. Also, |NodeName| and |ObjectName| values now show the same thing.

    You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • How to update grub with puppet?

    - by Tombart
    I would like to change a line in /etc/default/grub with puppet to this:

        GRUB_CMDLINE_LINUX="cgroup_enable=memory"

    I've tried to use augeas, which seems to do this magic:

        exec { "update_grub":
            command     => "update-grub",
            refreshonly => true,
        }

        augeas { "grub-serial":
            context => "/files/etc/default/grub",
            changes => [
                "set /files/etc/default/grub/GRUB_CMDLINE_LINUX[last()] cgroup_enable=memory",
            ],
            notify  => Exec['update_grub'],
        }

    It seems to work, but the resulting string is not in quotes, and I also want to make sure that any other values will be separated by spaces:

        GRUB_CMDLINE_LINUX=cgroup_enable=memory

    Is there some mechanism to append values and escape the whole thing?

        GRUB_CMDLINE_LINUX="quiet splash cgroup_enable=memory"

    Read the article

  • Running CRON job on Ubuntu server for SugarCRM

    - by Logik
    I am pretty inexperienced with Linux, so be descriptive in your answer. My environment: a local Ubuntu 12.04 server hosting SugarCRM 6.5.2. There is an area in SugarCRM called Scheduler where I can configure some predefined jobs; in my case I am trying to run email reminders (every min/hour/day/month). For this scheduler to be effective, I read somewhere that I need to set up a cron job, so I did some research and finally put the following line in the crontab for the root user, as per the instructions given in SugarCRM:

        * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1

    Well, I am creating contracts in my SugarCRM (AOS module) and I want email reminders for these contracts to be sent to the concerned person. Now my SugarCRM email is configured correctly and I can send test emails using it, but the cron + scheduler combination gives no result: I don't receive any emails. Then I tried to read /var/log/syslog, and it shows an entry for the following line each minute:

        Oct 27 15:03:01 unicomm CRON[28182]: (root) CMD (cd /var/www/crm; php -f cron.php > /dev/null 2>&1)

    I have a few questions: what does the cron job line I've added to the crontab mean? "cd /var/www/crm; php -f cron.php > /dev/null 2>&1" is not making any sense to me. How am I supposed to get this thing to work? I've searched a lot (including the SugarCRM forum), but no luck.
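
    For the first question, here is a field-by-field reading of that crontab line (this is standard cron syntax, nothing SugarCRM-specific is assumed):

        # minute  hour  day-of-month  month  day-of-week   command
        #   *      *         *          *         *        -> run every minute
        * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1
        #
        # cd /var/www/crm   -- change into the SugarCRM install directory
        # php -f cron.php   -- run SugarCRM's scheduler entry-point script
        # > /dev/null 2>&1  -- send stdout to /dev/null and redirect stderr there too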

    Read the article

  • Nexus One USB freezes Vista SP2 Quad Core

    - by user25479
    My Nexus One (N1) will occasionally freeze my Vista 32-bit SP2 quad-core 2.4GHz machine during POST. The motherboard is an ASUS P5N-E SLI with 4GB of RAM. While the PC is booting, if I connect the N1 via USB the boot sequence freezes, then continues once I unplug the N1 USB link. It happens whether or not the N1 is in USB debugging mode. I'm not sure whether this is an N1 hardware/firmware issue, a system interaction with my PC, or a result of my N1 development environment (I'm using the Eclipse Galileo IDE for Java Developers, primarily compiling to API Level 7; Eclipse has also occasionally frozen, although I haven't established N1 USB cause-and-effect on that issue). Is anyone else experiencing this symptom?

    Read the article

  • Error with TextMate 2 --shell-escape and gnuplot 10.8.2

    - by Manuel
    I have had TM 1.x for a long time, but a week ago I updated my system to Mountain Lion 10.8.2 and installed TM2. The problem is that I write with LaTeX, and I sometimes use gnuplot for the graphs (I installed gnuplot with MacPorts). But now it doesn't work, because the --shell-escape option isn't applied; this is the error message I get:

        Package pgfplots Error: Sorry, the gnuplot-result file '"untitled 2.pgf-plot.table"'
        could not be found. Maybe you need to enable the shell-escape feature? For pdflatex,
        this is ' pdflatex -shell-escape'. You can also invoke ' gnuplot .gnuplot' manually
        on the respective gnuplot file..

    Then, looking around, I discovered that it's not just gnuplot but everything which needs --shell-escape.

    Question

    What happened? How can I get TM2 to have the correct rights so this works? It worked fine in Snow Leopard with TM 1.5.
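
    A quick way to separate TextMate from LaTeX here: compile once by hand from a terminal, in the document's directory, with shell-escape enabled (the file name is taken from the error message above and may differ). If this run succeeds, the problem is TM2's build invocation rather than the document, pdflatex or gnuplot:

        cd ~/path/to/document        # wherever the .tex file lives (path is an example)
        pdflatex -shell-escape "untitled 2.tex"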

    Read the article

  • Why is my FTP output file blank?

    - by Nathan Long
    From the Windows command prompt, I have FTPed to a Windows web server. I can get a file, and I can see a directory listing with dir, but I want to save that list locally. I tried dir > c:\somefile.txt, and the file is created, but it's blank. The same thing happens with ls > c:\somefile.txt, and the result is the same when I FTP from a Linux box. FTP sends back the following:

        200 PORT command successful
        150 Opening ASCII mode data connection for /bin/ls
        226 Transfer complete
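
    A hedged note on why this typically happens: the command-line ftp client does not hand > redirection to the shell; instead, dir and ls take an optional second argument naming a local file to write the listing to. A minimal sketch (the '.' is the remote directory; exact option support varies between the Windows and Linux ftp clients):

        ftp> dir . c:\somefile.txt
        ftp> ls -l . c:\somefile.txt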

    Read the article
