Search Results

Search found 69002 results on 2761 pages for 'file rename'.


  • Using Rich Text Editor (WYSIWYG) in ASP.NET MVC

    - by imran_ku07
    Introduction: In the ASP.NET MVC forum I found some questions about a sample HTML rich text box editor (also known as a WYSIWYG editor), so I decided to create a sample ASP.NET MVC web application that uses one. There are a lot of HTML editors available; for this sample application I decided to use the cross-browser WYSIWYG editor from OpenWebWare. In this article I will discuss the changes needed to make this editor work with ASP.NET MVC. I have attached the sample application for download at http://www.speedfile.org/155076. Note that I will only show the important features, not discuss every feature in detail.

    Description: Let's start by creating a sample ASP.NET MVC application. You need to add the following script files:

        jquery-1.3.2.min.js
        jquery_form.js
        wysiwyg.js
        wysiwyg-settings.js
        wysiwyg-popup.js

    Put these files inside the Scripts folder. Also put wysiwyg.css in your Content folder and add the addons and popups folders to your project. Finally, create an empty folder named Uploads to store the uploaded images. Next, open wysiwyg.js and set your configuration:

        // Images Directory
        this.ImagesDir = "/addons/imagelibrary/images/";

        // Popups Directory
        this.PopupsDir = "/popups/";

        // CSS Directory File
        this.CSSFile = "/Content/wysiwyg.css";

    Next, create a simple view TextEditor.aspx inside the Views/Home folder and add the following HTML:

        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head runat="server">
            <title>TextEditor</title>
            <script src="../../Scripts/wysiwyg.js" type="text/javascript"></script>
            <script src="../../Scripts/wysiwyg-settings.js" type="text/javascript"></script>
            <script type="text/javascript">
                WYSIWYG.attach('text', full);
            </script>
        </head>
        <body>
            <% using (Html.BeginForm()) { %>
                <textarea id="text" name="test2" style="width:850px;height:200px;">
                </textarea>
                <input type="submit" value="submit" />
            <% } %>
        </body>
        </html>

    Here I have just added a textarea control and a submit button inside a form. Note that the id of the textarea is the same as the first parameter passed to the WYSIWYG.attach function. Next comes HomeController.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.IO;

        namespace HtmlTextEditor.Controllers
        {
            [HandleError]
            public class HomeController : Controller
            {
                public ActionResult Index()
                {
                    ViewData["Message"] = "Welcome to ASP.NET MVC!";
                    return View();
                }

                public ActionResult About()
                {
                    return View();
                }

                public ActionResult TextEditor()
                {
                    return View();
                }

                [AcceptVerbs(HttpVerbs.Post)]
                [ValidateInput(false)]
                public ActionResult TextEditor(string test2)
                {
                    Session["html"] = test2;
                    return RedirectToAction("Index");
                }

                public ActionResult UploadImage()
                {
                    if (Request.Files[0].FileName != "")
                    {
                        Request.Files[0].SaveAs(Server.MapPath("~/Uploads/" + Path.GetFileName(Request.Files[0].FileName)));
                        return Content(Url.Content("~/Uploads/" + Path.GetFileName(Request.Files[0].FileName)));
                    }
                    return Content("a");
                }
            }
        }

    Simple code: the POST action just saves the submitted HTML into Session. Its test2 parameter has the same name as the textarea control in the TextEditor.aspx view, so model binding picks up the posted value. Also note that ValidateInput is set to false, so it is up to you to defend against XSS. There is also an action method, UploadImage, which simply saves the uploaded file into the Uploads folder.

    I am uploading the file using the jQuery Form Plugin. Here is the code, found in insert_image.html inside the addons folder:

        function ChangeImage() {
            var myform = document.getElementById("formUpload");
            $(myform).ajaxSubmit({ success: function (responseText) {
                insertImage(responseText);
                window.close();
            }});
        }

    And here is the Index view, which simply renders the editor HTML that was saved in Session:

        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>
        <asp:Content ID="indexTitle" ContentPlaceHolderID="TitleContent" runat="server">
            Home Page
        </asp:Content>
        <asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server">
            <h2><%= Html.Encode(ViewData["Message"]) %></h2>
            <p>
                To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
            </p>
            <% if (Session["html"] != null) {
                Response.Write(Session["html"].ToString());
            } %>
        </asp:Content>

    Summary: Hopefully you will enjoy this article. Just download the code and see the effect. From a security point of view, you must handle XSS attacks yourself. The sample application is available at http://www.speedfile.org/155076.
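    Since ValidateInput is disabled on the POST action, the submitted HTML reaches the server unfiltered, and the Index view renders it verbatim. As a rough illustration of the kind of defense the article leaves to the reader, here is a minimal server-side sanitizing sketch. It is not part of the sample application: the HtmlSanitizer class name is my own, and a regex pass like this is only a starting point - production code should use a vetted sanitizing library instead.

        using System.Text.RegularExpressions;

        public static class HtmlSanitizer // hypothetical helper, not from the sample
        {
            public static string SanitizeHtml(string html)
            {
                if (string.IsNullOrEmpty(html)) return html;

                // Drop <script>...</script> blocks entirely.
                html = Regex.Replace(html, @"<script[^>]*>[\s\S]*?</script>",
                                     string.Empty, RegexOptions.IgnoreCase);

                // Strip inline event handler attributes (onclick, onload, ...).
                html = Regex.Replace(html, @"\son\w+\s*=\s*(""[^""]*""|'[^']*'|[^\s>]+)",
                                     string.Empty, RegexOptions.IgnoreCase);

                // Neutralize javascript: URLs.
                html = Regex.Replace(html, @"javascript\s*:", string.Empty,
                                     RegexOptions.IgnoreCase);

                return html;
            }
        }

    The POST action could then store Session["html"] = HtmlSanitizer.SanitizeHtml(test2); instead of the raw input.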

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL.

    As always, we’ll need some example data.  In fact, we are going to use three tables today.  Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows:

        USE master;
        GO
        IF DB_ID('SortTest') IS NOT NULL
            DROP DATABASE SortTest;
        GO
        CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
        GO
        ALTER DATABASE SortTest
        MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
        GO
        ALTER DATABASE SortTest
        MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
        GO
        ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
        ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
        ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
        ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
        ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
        ALTER DATABASE SortTest SET MULTI_USER;
        ALTER DATABASE SortTest SET RECOVERY SIMPLE;

        USE SortTest;
        GO
        CREATE TABLE dbo.TestCHAR
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding CHAR(3999) NOT NULL,
            CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id),
        );

        CREATE TABLE dbo.TestMAX
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id),
        );

        CREATE TABLE dbo.TestTEXT
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding TEXT NOT NULL,
            CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id),
        );

        -- =============
        -- Load TestCHAR (about 3s)
        -- =============
        INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
        SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
        FROM
        (
            SELECT TOP (50000)
                n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
            FROM master.sys.columns C1,
                 master.sys.columns C2,
                 master.sys.columns C3
            ORDER BY n ASC
        ) AS Data
        ORDER BY Data.n ASC;

        -- ============
        -- Load TestMAX (about 3s)
        -- ============
        INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
        SELECT CONVERT(VARCHAR(MAX), padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- =============
        -- Load TestTEXT (about 5s)
        -- =============
        INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
        SELECT CONVERT(TEXT, padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- ==========
        -- Space used
        -- ==========
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';

        CHECKPOINT;

    That takes around 15 seconds to run, and shows the space allocated to each table in its output.  To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  The basic shape of the test query is the same for each of the three test tables:

        SELECT TOP (150)
            T.id,
            T.padding
        FROM dbo.Test AS T
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

    Test 1 – CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially.

    Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution.

        DECLARE @read BIGINT,
                @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TC.id,
            TC.padding
        FROM dbo.TestCHAR AS TC
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    Let’s take a closer look at the statistics and query plan generated from this.  Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB.

    Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution.

    Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.

    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

    If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post.

    CHAR(3999) Performance Summary:
        5 seconds elapsed time
        243MB memory grant
        195MB tempdb usage
        192MB estimated sort set
        25,043 logical reads
        Sort warning

    Test 2 – VARCHAR(MAX)

    We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

        DECLARE @read BIGINT,
                @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TM.id,
            TM.padding
        FROM dbo.TestMAX AS TM
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

    MAX Performance Summary:
        8 seconds elapsed time
        245MB memory grant
        391MB tempdb usage
        193MB estimated sort set
        25,043 logical reads
        Sort warning

    Test 3 – TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

        DECLARE @read BIGINT,
                @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TT.id,
            TT.padding
        FROM dbo.TestTEXT AS TT
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why:

    TEXT Performance Summary:
        0.5 seconds elapsed time
        9MB memory grant
        5MB tempdb usage
        5MB estimated sort set
        207 logical reads
        596 LOB logical reads
        Sort warning

    SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

    SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now).

    OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

    The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

        CREATE TABLE dbo.TestMAXOOR
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id),
        );

        EXECUTE sys.sp_tableoption
            @TableNamePattern = N'dbo.TestMAXOOR',
            @OptionName = 'large value types out of row',
            @OptionValue = 'true';

        SELECT large_value_types_out_of_row
        FROM sys.tables
        WHERE [schema_id] = SCHEMA_ID(N'dbo')
        AND name = N'TestMAXOOR';

        INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
        SELECT SPACE(0)
        FROM dbo.TestCHAR
        ORDER BY id;

        UPDATE TM WITH (TABLOCK)
        SET padding.WRITE (TC.padding, NULL, NULL)
        FROM dbo.TestMAXOOR AS TM
        JOIN dbo.TestCHAR AS TC
            ON TC.id = TM.id;

        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';
        CHECKPOINT;

    Test 4 – MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

        DECLARE @read BIGINT,
                @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            MO.id,
            MO.padding
        FROM dbo.TestMAXOOR AS MO
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    MAXOOR Performance Summary:
        0.3 seconds elapsed time
        245MB memory grant
        0MB tempdb usage
        193MB estimated sort set
        207 logical reads
        446 LOB logical reads
        No sort warning

    The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row).

    The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

    So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use.

    The Solution

    The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

    The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables:

        SET STATISTICS IO ON;

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestCHAR
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id = ANY (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestMAX
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestTEXT
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestMAXOOR
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

    All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries:

        Table 'TestCHAR'   logical reads 25511  lob logical reads   0
        Table 'TestMAX'    logical reads 25511  lob logical reads   0
        Table 'TestTEXT'   logical reads   412  lob logical reads 597
        Table 'TestMAXOOR' logical reads   413  lob logical reads 446

    We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.

        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

    The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are:

        Table 'TestCHAR'   logical reads 835  lob logical reads   0
        Table 'TestMAX'    logical reads 835  lob logical reads   0
        Table 'TestTEXT'   logical reads 686  lob logical reads 597
        Table 'TestMAXOOR' logical reads 686  lob logical reads 448

    With the new index, all four queries use the same query plan.

    Performance Summary:
        0.3 seconds elapsed time
        6MB memory grant
        0MB tempdb usage
        1MB sort set
        835 logical reads (CHAR, MAX)
        686 logical reads (TEXT, MAXOOR)
        597 LOB logical reads (TEXT)
        448 LOB logical reads (MAXOOR)
        No sort warning

    I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

    Conclusion

    This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production, that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either.

    The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

    By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query), do it before you head out for lunch…

    This post is dedicated to the people of Christchurch, New Zealand.

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi

    Read the article

  • Package operation failed when updating 12.04

    - by Gausstein
    Whenever I try to update 12.04, I get the "Package Operation Failed" message, and I haven't been able to update my system. The "Details" section says:

        installArchives() failed:
        Extracting templates from packages: 30%
        Extracting templates from packages: 61%
        Extracting templates from packages: 91%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        Extracting templates from packages: 30%
        Extracting templates from packages: 61%
        Extracting templates from packages: 91%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        Extracting templates from packages: 30%
        Extracting templates from packages: 61%
        Extracting templates from packages: 91%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        Extracting templates from packages: 30%
        Extracting templates from packages: 61%
        Extracting templates from packages: 91%
        Extracting templates from packages: 100%
        Preconfiguring packages ...
        dpkg: error: parsing file '/var/lib/dpkg/available' near line 42150 package 'x11-apps':
         blank line in value of field 'Description'
        Error in function:

    Read the article

  • Can't adjust screen brightness on Macbook Pro 10,1 Ubuntu 13.10

    - by ben101
    I recently installed Ubuntu on my Retina MacBook Pro (following this great guide: http://cberner.com/2013/03/01/installing-ubuntu-13-04-on-macbook-pro-retina/). Everything works fine so far; however, the screen brightness / backlight cannot be adjusted, neither with the assigned keys nor by any other means. I know I'm not the first to raise this problem, but none of the suggested solutions I have found so far worked for me. I unsuccessfully tried the following:

    - Including Option "RegistryDwords" "EnableBrightnessControl=1" in the Device section of /etc/X11/xorg.conf
    - The xbacklight application

    I also haven't found any file such as mbp_backlight or apple_backlight on my system, which would probably be a starting point. I'm using the Nvidia driver. (BTW: with the nouveau driver, the keys to adjust the brightness work; however, with nouveau, Ubuntu does not resume from suspend.) Any suggestions what I can do? Thanks!

    Read the article

  • Which LAN card / module combinations proven to work with Wake on LAN

    - by pablomo
    I've got a 12.04 headless server that I've been trying to get working with wake-on-LAN. The card is a Marvell 88E8053 using the sky2 module. Although WOL is enabled in the BIOS and ethtool shows the card as WOL-enabled, it refuses to wake when I send the magic packet. I have verified that the packet is received OK when the machine is on. The machine does wake OK from a BIOS alarm, which suggests it is a network card issue. I've seen references to bugs in sky2 that mean WOL fails in recent versions of Ubuntu (and have tried a module conf file as suggested here, but to no avail). So I am thinking the best bet is to replace the ethernet card with one that definitely works with WOL in 12.04 - please could you post your card's make and model number if you are using it successfully?

    Read the article

  • How does Google handle site traffic in Google Analytics

    - by Hamidreza
    I have a site at www.example.com with the Google Analytics JavaScript snippet installed. I have made an app for my site, and I want every use of the app to go through the site via the browser built into the application (I am using C# for the application and the .NET web browser control). The app will open www.example.com/appvisit, and I have put the Google Analytics script on that page and nothing else. I also want to disallow the /appvisit address in my robots.txt file. Is there any problem with doing this? Will Google still crawl the /appvisit directory? Does Google dislike this practice? And will Google treat this traffic as genuine, normal traffic? Thanks
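    For reference, the robots.txt rule the question describes would look something like this (a sketch only; the /appvisit path is taken from the question):

        User-agent: *
        Disallow: /appvisit

    Note that robots.txt only affects well-behaved crawlers - it does not stop the Google Analytics JavaScript from recording visits made by the app's embedded browser.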

    Read the article

  • Load SpriteFont in XNA

    - by user22715
    I'm planning on my game using multiple backgrounds, so I'm trying to use sprite fonts to draw the text. Every time I load my SpriteFont I get an error:

        Line1 = content.Load<SpriteFont>("Courier New");
        Line2 = content.Load<SpriteFont>("Courier New");

    This is the error I get:

        Error loading "Courier New". File not found.

    This even though Courier New is the font listed on the official Microsoft website: http://msdn.microsoft.com/en-us/library/bb447673.aspx
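    For context, Content.Load<SpriteFont>() in XNA takes the asset name of a compiled .spritefont content item, not the name of an installed Windows font. Below is a minimal sketch of the usual pattern; the asset name "GameFont" and the member names are assumptions for illustration - the string must match a .spritefont file added to the game's Content project (that file can reference Courier New internally).

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Game1 : Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            SpriteFont gameFont; // hypothetical field for the loaded font

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                // "GameFont" is the asset name of Content/GameFont.spritefont,
                // not the name of the font family it uses.
                gameFont = Content.Load<SpriteFont>("GameFont");
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.DrawString(gameFont, "Hello", Vector2.Zero, Color.White);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }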

    Read the article

  • How can I configure the Aiptek T-6000U graphics tablet?

    - by mejpark
    I followed the AiptekTablet instructions on the Ubuntu Wiki to configure 11.04 for use with my graphics tablet. I installed the xserver-xorg-input-aiptek package and created two files with the options detailed on the Wiki page above:

        $ cat /lib/udev/rules.d/69-xserver-xorg-input-aiptek.rules
        ACTION!="add|change", GOTO="xorg_aiptek_end"
        KERNEL!="event[0-9]*", GOTO="xorg_aiptek_end"
        ATTRS{idVendor}=="08ca", ENV{x11_driver}="aiptek", SYMLINK+="input/aiptektablet"
        LABEL="xorg_aiptek_end"

        $ cat /usr/share/X11/xorg.conf.d/50-aiptek.conf
        Section "InputClass"
            Identifier "pen"
            MatchProduct "Aiptek|AIPTEK|aiptek"
            MatchDevicePath "/dev/input/event*"
            Driver "aiptek"
            Option "USB" "on"
            Option "Type" "stylus"
            Option "Mode" "absolute"
            Option "zMin" "0"
            Option "zMax" "511"
        EndSection

    The 50-aiptek.conf file was originally called 10-aiptek.conf as in the Wiki, but an Aiptek tablet installation help thread on the Ubuntu Forums suggested changing 10 to 50. Any ideas? Thank you.

    Read the article

  • The Beginner’s Guide To Tabbed Browsing

    - by Chris Hoffman
    Tabs allow you to open multiple web pages in a single browser window without cluttering your desktop. Mastering tabbed browsing can speed up your browsing experience and make multiple web pages easier to manage. Tabbed browsing was once the domain of geeks using alternative browsers, but every popular browser now supports tabbed browsing – even mobile browsers on smartphones and tablets. This article is intended for beginners. If you know someone that doesn’t fully understand tabbed browsing and how awesome it is, feel free to send it to them!

    Read the article

  • Best practices for team workflow with RoR/Github for designer + coder?

    - by Josh
    My friend and I have started to try to collaborate on some projects. For background, I come from a PHP/Wordpress/Drupal coding background, but recently I've become more experienced with the RoR framework, while he is more experienced as an HTML/CSS designer, working with PHP and WordPress. We're both relatively new to RoR I think, and so we're trying to figure out our collaborative workflow, but we have no idea where to start. For instance, we were trying to figure out how he could do some minor edits to the CSS file without having to do a full RoR deploy on his box. We still haven't figured out a solution, so I think it's best if we start to set some sort of workflow based on best practices. I was wondering if you guys have any insight or links to articles/case studies regarding this topic?

    Read the article

  • Integrate Microsoft Translator into your ASP.Net application

    - by sreejukg
    In this article I am going to explain how easily you can integrate the Microsoft translator API to your ASP.Net application. Why we need a translation API? Once you published a website, you are opening a channel to the global audience. So making the web content available only in one language doesn’t cover all your audience. Especially when you are offering products/services it is important to provide contents in multiple languages. Users will be more comfortable when they see the content in their native language. How to achieve this, hiring translators and translate the content to all your user’s languages will cost you lot of money, and it is not a one time job, you need to translate the contents on the go. What is the alternative, we need to look for machine translation. Thankfully there are some translator engines available that gives you API level access, so that automatically you can translate the content and display to the user. Microsoft Translator API is an excellent set of web service APIs that allows developers to use the machine translation technology in their own applications. The Microsoft Translator API is offered through Windows Azure market place. In order to access the data services published in Windows Azure market place, you need to have an account. The registration process is simple, and it is common for all the services offered through the market place. Last year I had written an article about Bing Search API, where I covered the registration process. You can refer the article here. http://weblogs.asp.net/sreejukg/archive/2012/07/04/integrate-bing-search-api-to-asp-net-application.aspx Once you registered with Windows market place, you will get your APP ID. Now you can visit the Microsoft Translator page and click on the sign up button. http://datamarket.azure.com/dataset/bing/microsofttranslator As you can see, there are several options available for you to subscribe. There is a free version available, great. Click on the sign up button under the package that suits you. Clicking on the sign up button will bring the sign up form, where you need to agree on the terms and conditions and go ahead. You need to have a windows live account in order to sign up for any service available in Windows Azure market place. Once you signed up successfully, you will receive the thank you page. You can download the C# class library from here so that the integration can be made without writing much code. The C# file name is TranslatorContainer.cs. At any point of time, you can visit https://datamarket.azure.com/account/datasets to see the applications you are subscribed to. Click on the Use link next to each service will give you the details of the application. You need to not the primary account key and URL of the service to use in your application. Now let us start our ASP.Net project. I have created an empty ASP.Net web application using Visual Studio 2010 and named it Translator Sample, any name could work. By default, the web application in solution explorer looks as follows. Now right click the project and select Add -> Existing Item and then browse to the TranslatorContainer.cs. Now let us create a page where user enter some data and perform the translation. I have added a new web form to the project with name Translate.aspx. I have placed one textbox control for user to type the text to translate, the dropdown list to select the target language, a label to display the translated text and a button to perform the translation. 
For the dropdown list I have selected some languages supported by Microsoft translator. You can get all the supported languages with their codes from the below link. http://msdn.microsoft.com/en-us/library/hh456380.aspx The form looks as below in the design surface of Visual Studio. All the class libraries in the windows market place requires reference to System.Data.Services.Client, let us add the reference. You can find the documentation of how to use the downloaded class library from the below link. http://msdn.microsoft.com/en-us/library/gg312154.aspx Let us evaluate the translatorContainer.cs file. You can refer the code and it is self-explanatory. Note the namespace name used (Microsoft), you need to add the namespace reference to your page. I have added the following event for the translate button. The code is self-explanatory. You are creating an object of TranslatorContainer class by passing the translation service URL. Now you need to set credentials for your Translator container object, which will be your account key. The TranslatorContainer support a method that accept a text input, source language and destination language and returns DataServiceQuery<Translation>. Let us see this working, I just ran the application and entered Good Morning in the Textbox. Selected target language and see the output as follows. It is easy to build great translator applications using Microsoft translator API, and there is a reasonable amount of translation you can perform in your application for free. For enterprises, you can subscribe to the appropriate package and make your application multi-lingual.
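    To make the flow concrete, here is a rough sketch of what such a button handler could look like. This is an illustration written against the pattern the article describes, not the article's exact code: the control names (txtInput, ddlLanguage, lblResult), the service URL, and the placeholder account key are assumptions.

        using System;
        using System.Linq;
        using System.Net;
        using Microsoft; // namespace of the generated TranslatorContainer class

        public partial class Translate : System.Web.UI.Page
        {
            protected void btnTranslate_Click(object sender, EventArgs e)
            {
                // Service root URI as listed for Microsoft Translator in the marketplace.
                var serviceUri = new Uri("https://api.datamarket.azure.com/Bing/MicrosoftTranslator/");
                var container = new TranslatorContainer(serviceUri);

                // Authenticate with the primary account key from your marketplace account page.
                container.Credentials = new NetworkCredential("accountKey", "YOUR-ACCOUNT-KEY");

                // Translate from English to the language code selected in the drop-down.
                var query = container.Translate(txtInput.Text, ddlLanguage.SelectedValue, "en");
                var result = query.Execute().FirstOrDefault();

                if (result != null)
                {
                    lblResult.Text = result.Text;
                }
            }
        }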

    Read the article

  • Have Fun with Mozilla Firefox’s Spark and Ember Paper Toys [Geek Project]

    - by Asian Angel
    Are you looking for a fun paper-craft project to work on while showing your support for Mozilla Firefox? Then you will enjoy the Spark and Ember paper toys that the folks from the Mozilla Indonesia community have put together. Each comes in an easy to print three page PDF file for easy storage and sharing with friends. Photo courtesy of Mozilla Blog Indonesia.

    Download the Spark Paper Toy
    Download the Ember Paper Toy
    [via Mozilla Blog Indonesia]

    Read the article

  • How can I re-open the TrueCrypt window after it's been closed?

    - by user27451
    I installed TrueCrypt 7.1 Standard 64-bit on a fresh install of Ubuntu 11.10 64-bit. After finding the application in the Dash I dragged its icon onto the Unity launcher. I then clicked that icon and TrueCrypt's main window opened. I mounted my encrypted file/volume and then closed the window to do some work. To re-open the TrueCrypt window I would normally click the small blue TrueCrypt icon that appears on the top panel. In Ubuntu 11.10 that icon is no longer there. I receive a message ("TrueCrypt is already running.") if I click on the TrueCrypt icon in the launcher. How can I re-open the TrueCrypt window after it's been closed in Ubuntu 11.10?

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson, Search, so you may want to go read that first.

    They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog.

    But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler.

    Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this:

    [Screenshot: A table in Oracle SQL Developer Data Modeler]

    What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS.

    [Screenshot: Search, filter, then Report]

    With the filter set to ‘Columns,’ if I do a report I’ll only be getting the columns that resolve to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button:

    [Screenshot: XLS or XLSX, either format is just fine]

    I want to decide how the column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and set up a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work.

    [Screenshot: Now let the Excel junkies do their stuff]

    Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude:

    [Screenshot: Be kind, please rew…use comments.]

    Save the file, email it back to your modeler.

    Update the model from Excel: that’s right, it’s a right mouse click from your model in the tree. If everything goes right, you’ll see a nice confirmation message:

    [Screenshot: It’s alive!]

    Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now:

    [Screenshot: Voila!]

    Why are we doing this again? The goal is to reduce the number of round-trips between the modeler and the business process owner. One is used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • MVC: Does code to save data in cache or session belong in the controller?

    - by newbie
    I'm a bit confused whether the session-saving code below belongs in the controller action, as shown, or should be part of my model. I should add that I have other controller methods that will read this session value later.

        public ActionResult AddFriend(FriendsContext viewModel)
        {
            if (!ModelState.IsValid)
            {
                return View(viewModel);
            }

            // Start - confused whether the code block below belongs in the controller
            Friend friend = new Friend();
            friend.FirstName = viewModel.FirstName;
            friend.LastName = viewModel.LastName;
            friend.Email = viewModel.UserEmail;
            HttpContext.Session["latest-friend"] = friend;
            // End confusion

            return RedirectToAction("Home");
        }

    I thought about adding a static utility class to my model which does something like the code below, but it just seems stupid to add two lines of code in another file.

        public static void SaveLatestFriend(Friend friend, HttpContextBase httpContext)
        {
            httpContext.Session["latest-friend"] = friend;
        }

        public static Friend GetLatestFriend(HttpContextBase httpContext)
        {
            return httpContext.Session["latest-friend"] as Friend;
        }

    Read the article

  • A tale of two useful utilities

    - by TATWORTH
    This time I want to introduce you to two utilities that both have a tail! The first is the BeaverTail ADSI browser at http://adsi.mvps.org/adsi/CSharp/beavertail.html. This is a useful utility for doing Active Directory queries. It is free for both personal and commercial use, and the source code is also available. The second is a Windows equivalent of the Unix tail command, allowing easy reading of flat-file logs. It is free for personal use but must be registered for commercial use. Download it from http://www.uvviewsoft.com/logviewer/
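    For anyone unfamiliar with what tail does, here is a minimal C# sketch of the idea behind such utilities - following a growing log file and printing new lines as they are appended. This is an illustration of the concept only, not code from either utility.

        using System;
        using System.IO;
        using System.Threading;

        class Tail
        {
            static void Main(string[] args)
            {
                string path = args[0];

                // Open the log without locking out the process that writes it.
                using (var stream = new FileStream(path, FileMode.Open,
                                                   FileAccess.Read, FileShare.ReadWrite))
                using (var reader = new StreamReader(stream))
                {
                    // Start at the end of the file, like 'tail -f'.
                    stream.Seek(0, SeekOrigin.End);

                    while (true)
                    {
                        string line = reader.ReadLine();
                        if (line != null)
                            Console.WriteLine(line);
                        else
                            Thread.Sleep(500); // wait for the writer to append more
                    }
                }
            }
        }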

    Read the article

  • Does "diff" exist for images?

    - by moose
    You can compare two text files very easily with diff, and even better with meld. If you use diff on images, you get something like this:

        $ diff zivi-besch.tif zivildienst.tif
        Binary files zivi-besch.tif and zivildienst.tif differ

    Here is an example: the original is from http://commons.wikimedia.org/wiki/File:Tux.svg; for the edited version, I added a white background to both images and applied GIMP's "Difference" filter. That is a very simple way an image diff could work, but I can imagine much better (and more complicated) ones. Do you know a program which works for images like meld does for texts? (If a program existed that could give a percentage - from 0% for completely different to 100% for the same image - I would also be interested in it, but I am looking for one that gives me visual hints on where the differences are.)

    Read the article

  • System reverts to 87Hz refresh rate at every startup after I have installed nvidia drivers

    - by Mohammad Kamil Nadeem
    Every time the system starts, the screen's refresh rate reverts to 87Hz, which results in a pixelated and flickery screen that I have to correct manually every time by selecting 60Hz as my refresh rate. I have tried "Save to X configuration file" and even tried making the changes as root, but to no avail, as it again reverts to 87Hz on every system startup. The open source drivers are okay for regular Unity, but many games don't work with them, hence I had to install the Nvidia drivers. I have been facing this since the beta phase, although this is on a fresh installation of the 12.04 final release. I am also providing my xorg.conf file just in case it might help: http://paste.ubuntu.com/952196/ Also, for some reason Displays shows my CRT monitor as "Laptop", but on the open source drivers it was correctly shown as a 14" CRT. This bug is also present on Edubuntu 12.04, but not on Xubuntu 12.04: there I had selected to install updates and 3rd party software during the install and was greeted with a correct refresh rate on boot. I like Xubuntu.

    Read the article

  • Decompilers - Myth or Fact ?

    - by Simon
    Lately I have been thinking about application security, binaries, and decompilers. (FYI - a decompiler is essentially the reverse of a compiler; its purpose is to recover the source from a binary.) Is there such a thing as a "perfect decompiler"? Or are binaries safe from reverse engineering? (For clarity's sake, by "perfect" I mean recovering the original source files with all the variable names/macros/functions/classes and, if possible, comments in the respective headers and source files used to produce the binary.) What are some of the best practices used to prevent reverse engineering of software? Is it a major concern? Also, are obfuscation and file permissions the only ways to prevent unauthorized hacks on scripts? (Call me a script junkie if you must.)

    Read the article

  • ODI 11g - Oracle Data Integrator 11g – A Hands-On Tutorial

    - by David Allan
    I've been asked by Packt Publishing to review a brand new book on Oracle Data Integrator: Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial. I'm waiting for the book to arrive to see what goodies are inside, and I'll blog a review later. The book can be found at Oracle Data Integrator 11g – A Hands-On Tutorial. Looking at the table of contents, it looks like it gives a good, broad introduction to the product (including various data formats):

        Chapter 1: Product Overview
        Chapter 2: Product Installation
        Chapter 3: Using Variables
        Chapter 4: ODI Sources, Targets, and Knowledge Modules
        Chapter 5: Working with Databases
        Chapter 6: Working with MySQL
        Chapter 7: Working with Microsoft SQL Server
        Chapter 8: Integrating File Data
        Chapter 9: Working with XML Files
        Chapter 10: Creating Workflows—Packages and Load Plans
        Chapter 11: Error Management
        Chapter 12: Managing and Monitoring ODI Components
        Chapter 13: Concluding Remarks

    Looking forward to it.

    Read the article

  • My laptop cannot access the internet after setting up a bridge with wired and wireless NICs

    - by Victor S
    I set up a bridge using a wired NIC and a wireless NIC in order to make the wireless NIC act as a Wi-Fi AP sharing the wired internet access. The Wi-Fi AP works as I expect, but my laptop cannot access the internet. Please give me a hand. Thanks.

    My hostapd.conf:

        $ cat hostapd.conf
        interface=wlan0
        bridge=br0
        driver=nl80211
        ssid=myAP
        hw_mode=g
        channel=11
        dtim_period=1
        rts_threshold=2347
        fragm_threshold=2346
        macaddr_acl=0
        auth_algs=3
        ieee80211n=0
        wpa=3
        wpa_passphrase=PassWord
        wpa_key_mgmt=WPA-PSK
        wpa_pairwise=TKIP
        rsn_pairwise=CCMP

    Setup steps:

        $ sudo killall hostapd
        hostapd: no process found
        $ sudo hostapd -B hostapd.conf
        Configuration file: hostapd.conf
        Using interface wlan0 with hwaddr 00:26:5e:e8:4f:8e and ssid 'myAP'
        $ sudo brctl addbr br0
        device br0 already exists; can't create bridge with the same name
        $ sudo ifconfig eth0 0.0.0.0 up
        $ sudo ifconfig wlan0 0.0.0.0 up
        $ sudo brctl addif br0 eth0
        $ sudo brctl addif br0 wlan0
        device wlan0 is already a member of a bridge; can't enslave it to bridge br0.
        $ sudo ifconfig br0 192.168.1.110 netmask 255.255.255.0
        $ sudo route add default gw 192.168.1.1

    Read the article

  • I made a 2D ENGINE for Android, looking for cooperation.

    - by Roger Travis
    My name is Robert, I am an Android programmer, and I wanted to show off my latest project - a 2D game engine. You can see it in action here: https://play.google.com/store/apps/details?id=engineDemo.com My engine's main advantage is its ease of use. To have your level up and running, you'll need only 3 lines of code:

        ABoxView aboxView = new ABoxView(this);
        setContentView(aboxView);
        aboxView.loadLevel("level/level02");

    Levels are created in a special level constructor, and objects' physical properties are stored in a corresponding XML file. I am looking to cooperate with those who might be interested in using my engine in their games. You can email me at [email protected] or post here. Thanks, Robert

    Read the article

  • Big Data Accelerator

    - by Jean-Pierre Dijcks
    For everyone who does not regularly listen to earnings calls, Oracle's Q4 call was interesting (as it mostly is). One of the announcements in the call was the Big Data Accelerator from Oracle (Seeking Alpha link here - slightly tweaked for correctness shown below):  "The big data accelerator includes some of the standard open source software, HDFS, the file system and a number of other pieces, but also some Oracle components that we think can dramatically speed up the entire map-reduce process. And will be particularly attractive to Java programmers [...]. There are some interesting applications they do, ETL is one. Log processing is another. We're going to have a lot of those features, functions and pre-built applications in our big data accelerator."  Not much else we can say right now, more on this (and Big Data in general) at Openworld!

    Read the article

  • Absent Code attribute in method that is not native or abstract

    - by kerry
    I got the following, quite puzzling error today when running a unit test:

        java.lang.ClassFormatError: Absent Code attribute in method that is not
        native or abstract in class file javax/servlet/http/Cookie

    A Google search found this post, which explains that it is caused by having an interface in the classpath rather than an actual implementation - in this case, the Java EE API interface jar. To fix this I added the Jetty servlet API implementation to my pom:

        <dependency>
            <groupId>jetty</groupId>
            <artifactId>javax.servlet</artifactId>
            <scope>test</scope>
        </dependency>

    Piece of cake. I have run into this before, so I figured I would capture the fix here in case I run into it again.

    Read the article

  • Copyright notices/disclaimers in source files

    - by mojuba
    It's a common practice to place copyright notices, various legal disclaimers and sometimes even full license agreements in each source file of an open-source project. Is this really necessary for a (1) open-source project and (2) closed-source project? What are you trying to achieve or prevent by putting these notices in source files? I understand it's a legal question and I doubt we can get a fully competent answer here at programmers.SO (it's for programmers, isn't it?) What would also be interesting to hear is, when you put legal stuff in your source files, is it because "everyone does it" or you got legal advice? What was the reasoning?

    Read the article
