Search Results

Search found 22515 results on 901 pages for 'created'.


  • How to build Open JavaFX for Android.

    - by PictureCo
    Here's a short recipe for baking JavaFX for Android dalvik. We will need just a few ingredients, but each one requires special care. So let's get down to business.

    Sources
    The first ingredient is an open JavaFX repository. This should be a piece of cake. As always, there's a catch. You probably know that dalvik is jdk6 compatible and also that certain APIs are missing compared to the good old java vm from Oracle. Fortunately there is a repository which is a backport of regular OpenJFX to jdk7, and going from jdk7 to jdk6 is possible. The first thing to do is to clone or download the repository from https://bitbucket.org/narya/jfx78. The main page of the project says "It works in some cases", so we will presume that it will work in most cases. As I've said, the dalvik vm is missing some APIs, which would lead to build failures. To get them, use another compatibility repository which is available on GitHub: https://github.com/robovm/robovm-jfx78-compat. Download the zip and unzip the sources into jfx78/modules/base. We also need JavaFX binary stubs; use jfxrt.jar from jdk8. The last thing to download is the freetype sources from http://freetype.org. These will be necessary for native font rendering.

    Toolchain setup
    I have to point out that these instructions were tested only on linux. I suppose they will work with minimal changes also on Mac OS. I also presume that you were able to build open JavaFX. That means all tools like ant, gradle, gcc and jdk8 have been installed and are working all right. In addition to this you will need to download and install jdk7, the Android SDK and the Android NDK for native code compilation. Installing all of them will take some time. Don't forget to put them in your path.

        export ANDROID_SDK=/opt/android-sdk-linux
        export ANDROID_NDK=/opt/android-ndk-r9b
        export JAVA_HOME=/opt/jdk1.7.0
        export PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK

    Freetype
    Unzip the freetype release sources first. We will have to cross compile them for arm. First we will create a standalone toolchain for cross compiling, installed in ~/work/ndk-standalone-19:

        $ANDROID_NDK/build/tools/make-standalone-toolchain.sh --platform=android-19 --install-dir=~/work/ndk-standalone-19

    After the standalone toolchain has been created, cross compile freetype with the following script:

        export TOOLCHAIN=~/work/ndk-standalone-19
        export PATH=$TOOLCHAIN/bin:$PATH
        export FREETYPE=`pwd`
        ./configure --host=arm-linux-androideabi --prefix=$FREETYPE/install --without-png --without-zlib --enable-shared
        sed -i 's/\-version\-info \$(version_info)/-avoid-version/' builds/unix/unix-cc.mk
        make
        make install

    It will compile and install the freetype library into $FREETYPE/install. We will link to this install dir later on. It would also be possible to link openjfx font support dynamically against the skia library available on Android, which already contains freetype. That creates a smaller result but can have compatibility problems.

    Patching
    Download the patches javafx-android-compat.patch + android-tools.patch and patch the jfx78 repository. I recommend having a look at the patches. The first one, javafx-android-compat.patch, updates the openjfx build script, removes the dependency on SharedSecret classes and updates LensLogger to remove the dependency on the jdk-specific PlatformLogger. The second one, android-tools.patch, creates a helper script in android-tools. The script helps to set up JavaFX Android projects.

    Building
    Now it's time to try the build. Run the following script:

        JAVA_HOME=/opt/jdk1.7.0 JDK_HOME=/opt/jdk1.7.0 ANDROID_SDK=/opt/android-sdk-linux ANDROID_NDK=/opt/android-ndk-r9b \
        PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK:$PATH \
        gradle -PDEBUG -PDALVIK_VM=true -PBINARY_STUB=~/work/binary_stub/linux/rt/lib/ext/jfxrt.jar \
            -PFREETYPE_DIR=~/work/freetype/install -PCOMPILE_TARGETS=android

    If everything went all right, the output is in build/android-sdk.

    Create your first JavaFX Android project
    Use the gradle script in android-tools. The script sets up the project structure for you. The following command creates an Android HelloWorld project which links to a freshly built javafx runtime and to a HelloWorld application. NAME is the name of the Android project. DIR is where to create our first project. PACKAGE is the package name required by Android; it has nothing to do with the packaging of a javafx application. JFX_SDK points to our recently built runtime. JFX_APP points to the dist directory of the javafx application (where all the application jars sit). JFX_MAIN is the fully qualified name of the main class.

        gradle -PDEBUG -PDIR=/home/user/work -PNAME=HelloWorld -PPACKAGE=com.helloworld \
            -PJFX_SDK=/home/user/work/jfx78/build/android-sdk -PJFX_APP=/home/user/NetBeansProjects/HelloWorld/dist \
            -PJFX_MAIN=com.helloworld.HelloWorld createProject

    Now cd to the created project and use it like any other android project: ant clean, debug, uninstall and installd will work. I haven't tried it from any IDE, neither Eclipse nor NetBeans. Special thanks to Stefan Fuchs and Daniel Zwolenski for the repositories used in this blog post.

    Read the article

  • SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28

    - by pinaldave
    This is another common wait type. However, I still frequently see people getting confused with the PAGEIOLATCH_X and PAGELATCH_X wait types. Actually, there is a big difference between the two. PAGEIOLATCH is related to IO issues, while PAGELATCH is not related to IO issues but is oftentimes linked to a buffer issue. Before we delve deeper into this interesting topic, let us first understand what a latch is. Latches are internal SQL Server locks which can be described as very lightweight and short-term synchronization objects. Latches are not primarily there to protect pages being read from disk into memory; a latch is a synchronization object for any in-memory access to any portion of a log or data file. [Updated based on a comment from Paul Randal] The difference between locks and latches is that locks seal all the involved resources throughout the duration of the transaction (and other processes will have no access to the object), whereas latches lock the resources only during the time the data is changed. This way, a latch is able to maintain the integrity of the data between the storage engine and the data cache. A latch is a short-lived lock that is put on resources in the buffer cache and on the physical disk when data is moved in either direction. As soon as the data is moved, the latch is released. Now, let us understand the wait stat types related to latches. From Book On-Line:

    PAGELATCH_DT: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Destroy mode.
    PAGELATCH_EX: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Exclusive mode.
    PAGELATCH_KP: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Keep mode.
    PAGELATCH_SH: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Shared mode.
    PAGELATCH_UP: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Update mode.

    PAGELATCH_X Explanation: When there is contention on access to in-memory pages, this wait type shows up. It is quite possible that some of the pages in memory are in very high demand; for SQL Server to access them and put a latch on them, it will have to wait. Several of these wait types usually show up at the same time. Additionally, this is commonly visible when TempDB has higher contention as well. If there are indexes that are heavily used, contention can be created there as well, leading to this wait type.

    Reducing PAGELATCH_X wait: The following counters are useful to understand the status of PAGELATCH:
    Average Latch Wait Time (ms): The wait time for latch requests that have to wait.
    Latch Waits/sec: The number of latch requests that could not be granted immediately.
    Total Latch Wait Time (ms): The total latch wait time for latch requests in the last second.
    You can also check these waits on your own system with the query shown below.
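    As an aside (this query is not from the original post, but uses the standard wait-stats DMV), a quick way to see how much your system is currently waiting on page latches:

        SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type LIKE 'PAGELATCH%'
        ORDER BY wait_time_ms DESC;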
    If there is TempDB contention, I suggest that you read the blog post of Robert Davis right away. He has written an excellent blog post regarding how to find TempDB contention; the same post explains the terms in the allocation of GAM, SGAM and PFS. If there is TempDB contention, Paul Randal explains the optimal settings for TempDB in his misconceptions series. Trace Flag 1118 can be useful, but use it very carefully. I totally understand that this blog post is not as clear as my other blog posts, so if this wait type is one of your top waits, do leave a comment or send me an email and I will get back to you with my solution for your situation. Looking at all the other wait stats and types together is most effective, as this wait type can help point out the proper bottleneck in your system. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Make a Drive Image Using an Ubuntu Live CD

    - by Trevor Bekolay
    Cloning a hard drive is useful, but what if you have to make several copies, or you just want to make a complete backup of a hard drive? Drive images let you put everything, and we mean everything, from your hard drive in one big file. With an Ubuntu Live CD, this is a simple process – the versatile tool dd can do this for us right out of the box. We’ve used dd to clone a hard drive before. Making a drive image is very similar, except instead of copying data from one hard drive to another, we copy from a hard drive to a file. Drive images are more flexible, as you can do what you please with the data once you’ve pulled it off the source drive. Your drive image is going to be a big file, depending on the size of your source drive – dd will copy every bit of it, even if there’s only one tiny file stored on the whole hard drive. So, to start, make sure you have a device connected to your computer that will be large enough to hold the drive image. Some ideas for places to store the drive image, and how to connect to them in an Ubuntu Live CD, can be found at this previous Live CD article. In this article, we’re going to make an image of a 1GB drive, and store it on another hard drive in the same PC. Note: always be cautious when using dd, as it’s very easy to completely wipe out a drive, as we will show later in this article.

    Creating a Drive Image
    Boot up into the Ubuntu Live CD environment. Since we’re going to store the drive image on a local hard drive, we first have to mount it. Click on Places and then the location that you want to store the image on – in our case, a 136GB internal drive. Open a terminal window (Applications > Accessories > Terminal) and navigate to the newly mounted drive. All mounted drives should be in /media, so we’ll use the command cd /media, then type the first few letters of our difficult-to-type drive, press tab to auto-complete the name, and switch to that directory. If you wish to place the drive image in a specific folder, then navigate to it now. We’ll just place our drive image in the root of our mounted drive. The next step is to determine the identifier for the drive you want to make an image of. In the terminal window, type in the command:

        sudo fdisk -l

    Our 1GB drive is /dev/sda, so we make a note of that. Now we’ll use dd to make the image. The invocation is:

        sudo dd if=/dev/sda of=./OldHD.img

    This means that we want to copy from the input file (“if”) /dev/sda (our source drive) to the output file (“of”) OldHD.img, which is located in the current working directory (that’s the “.” portion of the “of” string). It takes some time, but our image has been created… Let’s test to make sure it works.

    Drive Image Testing: Wiping the Drive
    Another interesting thing that dd can do is totally wipe out the data on a drive (a process we’ve covered before). The command for that is:

        sudo dd if=/dev/urandom of=/dev/sda

    This takes some random data as input, and outputs it to our drive, /dev/sda. If we examine the drive now using sudo fdisk -l, we can see that the drive is, indeed, wiped.

    Drive Image Testing: Restoring the Drive Image
    We can restore our drive image with a call to dd that’s very similar to how we created the image. The only difference is that the image is going to be our input file, and the drive is now our output file. The exact invocation is:

        sudo dd if=./OldHD.img of=/dev/sda

    It takes a while, but when it’s finished, we can confirm with sudo fdisk -l that our drive is back to the way it used to be!
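    As a side note (not part of the original walkthrough): since dd copies every bit, including empty space, a common refinement is to pipe the image through gzip so that unused space compresses down to almost nothing. A sketch, assuming the same /dev/sda source drive:

        sudo dd if=/dev/sda | gzip > ./OldHD.img.gz
        # and to restore it later:
        gunzip -c ./OldHD.img.gz | sudo dd of=/dev/sda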
    Conclusion
    There are lots of reasons to create a drive image, with backup being the most obvious. Fortunately, with dd, creating a drive image only takes one line in a terminal window – if you’ve got an Ubuntu Live CD handy!

    Similar Articles: Reset Your Ubuntu Password Easily from the Live CD · Create a Bootable Ubuntu USB Flash Drive the Easy Way · How to Browse Without a Trace with an Ubuntu Live CD · Wipe, Delete, and Securely Destroy Your Hard Drive’s Data the Easy Way · Clone a Hard Drive Using an Ubuntu Live CD

    Read the article

  • mysql show databases not showing databases that are in /opt/bitnami/mysql/data directory

    - by hgolov
    Hello, and thank you for taking the time to look at my question. I have an EBS-backed EC2 Ubuntu server which is running but unreachable. ** There are very stupidly no recent backups ** I made a snapshot of the block device, created a volume, spun up a new instance, and attached the new volume. I see all the data from my site in the /opt/bitnami/mysql/data directory, but when I go into the mysql console and type show databases; it shows only information_schema and test. How can I 'point' mysql to the correct folder? Thank you!
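    A sketch of one possible answer (the config file location and restart command are assumptions based on typical Bitnami stacks; check your own install): mysqld only lists databases under the datadir it was started with, so make sure datadir in the running server's my.cnf points at the directory that actually holds your data, then restart MySQL:

        # in /opt/bitnami/mysql/my.cnf (location may differ on your stack)
        [mysqld]
        datadir=/opt/bitnami/mysql/data

        # then restart the server, e.g. with Bitnami's control script
        sudo /opt/bitnami/ctlscript.sh restart mysql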

    Read the article

  • Fix: WCF - The type provided as the Service attribute value in the ServiceHost directive could not

    - by Ken Cox [MVP]
    I wanted to expose some raw data to users in my current ASP.NET 3.5 web site project. I created a subdirectory called ‘datafeeds’ and added a WCF Data Service. I wired the data service up to the Entity Framework class and, on running the ItemDataService.svc file, was greeted with:

        The type <> provided as the Service attribute value in the ServiceHost directive could not be found

    So why couldn’t it find the class? It was right there in the… oops! Instead of putting the ItemDataService.vb...(read more)
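    The excerpt is cut off, but for context, here is roughly what the ServiceHost directive inside an .svc file looks like for a WCF Data Service (the class name below is illustrative, not taken from the post). The Service attribute must name a type the runtime can actually find — in a web site project that typically means the code-behind class has to live in App_Code:

        <%@ ServiceHost Language="VB" Factory="System.Data.Services.DataServiceHostFactory"
            Service="ItemDataService" %>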

    Read the article

  • ASP.NET Web API - Screencast series Part 3: Delete and Update

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we'll start to modify data on the server using DELETE and POST methods. So far we've been looking at GET requests, and the difference between standard browsing in a web browser and navigating an HTTP API isn't quite as clear. Delete is where the difference becomes more obvious. With a "traditional" web page, to delete something you'd probably have a form that POSTs a request back to a controller that needs to know that it's really supposed to be deleting something (even though POST was really designed to create things), so it does the work and then returns some HTML back to the client that says whether or not the delete succeeded. There's a good amount of plumbing involved in communicating between client and server. That gets a lot easier when we just work with the standard HTTP DELETE verb. Here's how the server side code works:

        public Comment DeleteComment(int id) {
            Comment comment;
            if (!repository.TryGet(id, out comment))
                throw new HttpResponseException(HttpStatusCode.NotFound);
            repository.Delete(id);
            return comment;
        }

    If you look back at the GET /api/comments code in Part 2, you'll see that they start out exactly the same, because the use cases are similar - we're looking up an item by id and either displaying it or deleting it. So the only difference is that this method deletes the comment once it finds it. We don't need to do anything special to handle cases where the id isn't found, as the same HTTP 404 handling works fine here, too. Pretty much all "traditional" browsing uses just two HTTP verbs: GET and POST, so you might not be all that used to DELETE requests and think they're hard. Not so! Here's the jQuery method that calls /api/comments with the DELETE verb:

        $(function() {
            $("a.delete").live('click', function () {
                var id = $(this).data('comment-id');
                $.ajax({
                    url: "/api/comments/" + id,
                    type: 'DELETE',
                    cache: false,
                    statusCode: {
                        200: function(data) {
                            viewModel.comments.remove(
                                function(comment) { return comment.ID == data.ID; }
                            );
                        }
                    }
                });
                return false;
            });
        });

    So in order to use the DELETE verb instead of GET, we're just using $.ajax() and setting the type to DELETE. Not hard. But what's that statusCode business? Well, an HTTP status code of 200 is an OK response. Unless our Web API method sets another status (such as by throwing the Not Found exception we saw earlier), the default response status code is HTTP 200 - OK. That makes the jQuery code pretty simple - it calls the Delete action, and if it gets back an HTTP 200, the server-side delete was successful, so the comment can be deleted. Adding a new comment uses the POST verb. It starts out looking like an MVC controller action, using model binding to get the new comment from JSON data into a C# model object to add to the repository, but there are some interesting differences:

        public HttpResponseMessage<Comment> PostComment(Comment comment) {
            comment = repository.Add(comment);
            var response = new HttpResponseMessage<Comment>(comment, HttpStatusCode.Created);
            response.Headers.Location = new Uri(Request.RequestUri, "/api/comments/" + comment.ID.ToString());
            return response;
        }

    First off, the POST method is returning an HttpResponseMessage<Comment>. In the GET methods earlier, we were just returning a JSON payload with an HTTP 200 OK, so we could just return the model object and Web API would wrap it up in an HttpResponseMessage with that HTTP 200 for us (much as ASP.NET MVC controller actions can return strings, and they'll be automatically wrapped in a ContentResult). When we're creating a new comment, though, we want to follow standard REST practices and return the URL that points to the newly created comment in the Location header, and we can do that by explicitly creating that HttpResponseMessage and then setting the header information. And here's a key point - by using HTTP standard status codes and headers, our response payload doesn't need to explain any context - the client can see from the status code that the POST succeeded, the Location header tells it where to get it, and all it needs in the JSON payload is the actual content. Note: This is a simplified sample. Among other things, you'll need to consider security and authorization in your Web API's, and especially in methods that allow creating or deleting data. We'll look at authorization in Part 6. As for security, you'll want to consider things like mass assignment if binding directly to model objects, etc. In Part 4, we'll extend on our simple querying methods from Part 2, adding in support for paging and querying.
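    As an aside (this client snippet is not from the series' sample code; newComment and viewModel are assumed from the earlier jQuery example), the matching client-side call that POSTs a new comment and handles the 201 Created response might look like:

        $.ajax({
            url: "/api/comments",
            type: 'POST',
            contentType: 'application/json',
            data: JSON.stringify(newComment),   // serialize the new comment as JSON
            statusCode: {
                201: function (data) {          // 201 Created, per the server code above
                    viewModel.comments.push(data);
                }
            }
        });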

    Read the article

  • Welcoming Karl Grambow to Coeo

    - by Christian
    After a massive search for our next ‘Mission Critical SQL Server DBA’, I’m very pleased to announce that we welcomed Karl Grambow into our team this week! Karl joins us from Microsoft Consulting Services (MCS) in the UK and started his career as a SQL Server 6.5 Developer before moving quickly into the operational DBA space, where he’s been ever since. He also dabbles in .NET and SSMS add-in development and has created a versioning tool called SQLDBControl. Outside of work he enjoys photography and Formula 1, and has recently become a Dad for the second time (congratulations!). Welcome Karl, we’re all looking forward to working with you! Karl will be manning our stand at SQLBits10 this week, so if you’ll be there, be sure to come over and say hi.   Christian Bolton - MCA, MCM, MVP Technical Director http://coeo.com - SQL Server Consulting & Managed Services

    Read the article

  • MySQL Connector/Net 6.4.6 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.4.6, a new version of the all-managed .NET driver for MySQL, has been released. This is a maintenance release and is recommended for use in production environments. It is appropriate for use with MySQL server versions 5.0-5.6. This is intended to be the final release for Connector/NET 6.4. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point - if you can't find this version on some mirror, please try again later or choose another download site.) The 6.4.6 version of MySQL Connector/Net brings the following fixes:

    - Fix for List.Contains generating a bunch of ORs instead of the more efficient IN clause in LINQ to Entities (Oracle bug #14016344, MySql bug #64934).
    - Fix for error when trying to change the name of an Index in the Indexes/Keys editor; along with this fix, users can now change the Index type of a new Index, which could not be done in previous versions, and when changing the Index name the change is reflected in the list view at the left side of the Index/Keys editor (Oracle bug #13613801).
    - Fix for stored procedure call using only its name with EF Code First (MySql bug #64999, Oracle bug #14008699).
    - Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363).
    - Fix for script generated for Code First containing wrong alter table and wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091).
    - Fix for exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779).
    - Fix for session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracle bug #13733054).
    - Fixed deleting a user profile using the Profile provider (MySQL bug #64409, Oracle bug #13790123).
    - Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202). This fix checks if the type has a FixedLength facet set in order to create a char; otherwise it creates varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First, also tested in Database First. Unit tests added for Code First and ProviderManifest.
    - Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL bug #66578, Oracle bug #14593547).
    - Fix for handling unnamed parameters in MySqlCommand. This fix allows the MySqlCommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?)) (MySQL bug #66060, Oracle bug #14499549).
    - Fixed inheritance in Entity Framework Code First scenarios. The discriminator column is created using its correct type, varchar(128) (MySql bug #63920 and Oracle bug #13582335).
    - Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048).
    - Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292).
    - Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960) by changing several internal declarations of lastinsertid from int to long.
    - Fixed "Decimal type should have digits at right of decimal point"; the default is now 2, but the user's changes in the EDM designer are recognized (MySql bug #65127, Oracle bug #14474342).
    - Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715).
    - Fix for error when calling RoleProvider.RemoveUserFromRole(): caused an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338).
    - Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand": too many MemoryStream instances created (MySql bug #65696, Oracle bug #14468204).
    - Small improvement on MySqlPoolManager CleanIdleConnections for better idle-cleanup timer behaviour at startup (MySql bug #66472 and Oracle bug #14652624).
    - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (MySql bug #66964, Oracle bug #14740705).
    - Fix for bug Keyword not supported. Parameter name: AttachDbFilename (MySql bug #66880, Oracle bug #14733472).
    - Added support to the MySql script file to retrieve data when using "SHOW" statements.
    - Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674).
    - Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718).
    - Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176).
    - Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964).

    The release is available to download at http://dev.mysql.com/downloads/connector/net/6.4.html

    Documentation
    -------------------------------------
    You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html
    You can find our team blog at http://blogs.oracle.com/MySQLOnWindows.
    You can also post questions on our forums at http://forums.mysql.com/.
    Enjoy and thanks for the support!

    Read the article

  • Why did object-oriented paradigms take so long to go mainstream?

    - by Earlz
    I read this question and it got me thinking about another fairly recent thing: object oriented languages. I'm not sure when the first one was created, but why did it take so long before they became mainstream?
    - C became vastly popular, but didn't become the object-oriented C++ until years (decades?) later
    - No mainstream language before the 90s was object oriented
    - Object oriented really seemed to take off with Java and C++ around the same time
    Now, my question: why did this take so long? Why wasn't C originally conceived as an object-oriented language? Taking a very small subset of C++ wouldn't have affected the core language a whole lot, so why was this idea not popular until the 90s?

    Read the article

  • HTTP Basic Auth Protected Services using Web Service Data Control

    - by vishal.s.jain(at)oracle.com
    With Oracle JDeveloper 11g (11.1.1.4.0) one can now create a Web Service Data Control for services which are protected with HTTP Basic Authentication. So when you provide such a service to the Data Control Wizard, a dialog pops up prompting you to enter the authentication details. After you give the details, you can proceed with the creation of the Data Control. Once the Data Control is created, you can use the WSDC Tester to quickly test the service. In this case, since the service is protected, we need to first edit the connection to provide the username details: enter the authentication details against username and password. Once done, select DataControl.dcx and, using the context menu, select 'Run'. This will bring up the Tester. On the Tester, select the Service node and, using the context menu, pick 'Operations'. This will bring up the methods which you can test. Now you can pick a method, provide the input parameters and hit execute to see the results.

    Read the article

  • Create Second Web Application using the Default port 80 In SharePoint2010

    - by ybbest
    As a SharePoint developer, one of the common tasks is to create a SharePoint Web Application. In this post I will show you how to create a second Web Application using the default port 80 in SharePoint 2010. You need to follow the steps below.
    1. Go to Central Admin => Application Management => Manage web applications and click New Web Application.
    2. I chose YBBEST as my IIS site name and host header name, changed the port number to 80 and left the rest of the settings as default.
    3. After the web application creation wizard completes, add an entry in the hosts file located at C:\Windows\System32\Drivers\etc\hosts (see the example below).
    4. Create a root site collection for the new web application. After the site collection is created, you can browse to the site collection using the URL http://ybbest.
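    As an illustration (the IP address is an assumption — it presumes you are browsing from the SharePoint server itself; ybbest is the host header from step 2), the hosts entry from step 3 would look something like this:

        127.0.0.1    ybbest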

    Read the article

  • Compress session state in ASP.Net 4.0

    - by nikolaosk
    Hello folks, in this post I would like to talk about a new feature of ASP.NET 4.0 - easy session state compression. When we create ASP.NET web applications, the user must feel that whenever he interacts with the website, he actually interacts with something that can safely be described as an application. What I mean by this is that during a postback the whole page is re-created and sent back to the client in a fraction of a second. The server has no idea what the user does with the page. If we...(read more)
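    The excerpt is cut off before the details, but for context, the ASP.NET 4.0 feature being described is switched on with a single attribute on the sessionState element, and it only applies to out-of-process session state (StateServer or SQLServer mode). A minimal sketch (the connection string is illustrative):

        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=localhost:42424"
                      compressionEnabled="true" />

    With compressionEnabled="true", ASP.NET compresses the serialized session data before sending it to the state store, trading a little CPU for less network traffic and storage.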

    Read the article

  • How to Automate your Database Documentation

    - by Jonathan Hickford
    In my previous post, “Automating Deployments with SQL Compare command line”, I looked at how teams can automate the deployment and post-deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I’m looking at another use for the command line tools, namely using them to generate up-to-date documentation with every database change. There are many reasons why up-to-date documentation is valuable. For example, when somebody new has to work on or administer a database for the first time, or when a new database comes into service. Having database documentation reduces the risks of making incorrect decisions when making changes. Documentation is very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples talking about why up-to-date documentation is valuable on this site: Database Documentation – Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need that most! SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word or pdf files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate’s SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects, and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring that the documentation is always up to date. The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day. However if you have a source controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you’re using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a Continuous Integration (CI) server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always up-to-date data dictionary. If you don’t already have a CI server in place there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won’t cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may be interested in Red Gate’s SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control. The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
        $serverName = "server\instance"
        $databaseName = "databaseName" # If you want to document multiple databases use a comma separated list
        $userName = "username"
        $password = "password"

        # Path to SQLDoc.exe
        $SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe"

        $arguments = @(
            "/server:$($serverName)",
            "/database:$($databaseName)",
            "/username:$($userName)",
            "/password:$($password)",
            "/filetype:html",
            "/outputfolder:.",
            # "/project:$args[0]", # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden with any options set on the command line
            "/name:$databaseName Report",
            "/copyrightauthor:$([Environment]::UserName)"
        )

        write-host $arguments
        & $SQLDocPath $arguments

    There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name, and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line please see the SQL Doc command line documentation. If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment this line and then pass a path to a .sqldoc project file as an argument to this script.

    Conclusion
    Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process you can trigger the documentation to be run on every change to a source controlled database, and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.
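    As an aside (not from the original article — the task name, script path and time below are illustrative), if you don't have a CI server, the "scheduled task to run a script every day" approach mentioned earlier can be set up with the built-in Windows task scheduler:

        schtasks /create /tn "Generate database documentation" /tr "powershell.exe -File C:\Scripts\GenerateDocs.ps1" /sc daily /st 02:00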

    Read the article

  • SQL SERVER – Identifying guest User using Policy Based Management

    - by pinaldave
    If you are following my recent blog posts, you may have noticed that I’ve been writing a lot about the guest user in SQL Server. Here are all the blog posts which I have written on this subject: SQL SERVER – Disable Guest Account – Serious Security Issue; SQL SERVER – Force Removing User from Database – Fix: Error: Could not drop login ‘test’ as the user is currently logged in; SQL SERVER – Detecting guest User Permissions – guest User Access Status; SQL SERVER – guest User and MSDB Database – Enable guest User on MSDB Database. One of the requests I received was whether we could create a policy that would ensure that the guest user is disabled in user databases. Well, here is a quick tutorial to answer this. Let us see how quickly we can do it.

    Requirements
    - Check if the guest user is disabled in all the user-created databases.
    - Exclude the master, tempdb and msdb databases from guest user validation.

    We will create the following conditions based on the above two requirements:
    - If the name of the user is ‘guest’
    - If the user has connect (@hasDBAccess) permission in the database
    - Check in all user databases, except: master, tempdb and msdb

    Once we create these conditions, we will create a policy which will validate them.

    Condition 1: Is the User Guest?
    Expand the Database >> Management >> Policy Management >> Conditions. Right click on Conditions, and click on “New Condition…”. First we will create a condition where we validate if the user name is ‘guest’, and if so, we will further validate if it has DB access.
    Facet: User
    Expression: @Name = ‘guest’

    Condition 2: Does the User have DB Access?
    Expand the Database >> Management >> Policy Management >> Conditions. Right click on Conditions and click on “New Condition…”. Now we will validate if the user has DB access.
    Facet: User
    Expression: @hasDBAccess = False

    Condition 3: Exclude Databases
    Expand the Database >> Management >> Policy Management >> Conditions. Right click on Conditions and click on “New Condition…”. Now we will create a condition where we validate if the database name is master, tempdb or msdb; if the database name is any of them, we will not validate our first condition against it.
    Facet: Database
    Expression: @Name != ‘msdb’ AND @Name != ‘tempdb’ AND @Name != ‘master’

    The next step is creating a policy which will enforce these conditions.

    Creating a Policy
    Right click on Policies and click “New Policy…”. Here, we specify which condition we want to validate and what the target is.
    Condition: Has User DBAccess
    Target Database: Every Database except (master, tempdb and msdb)
    Target User: Every User in the Target Database with the name ‘guest’
    Now we have two evaluation modes: 1) On Demand and 2) On Schedule. We will select On Demand in this example; however, you can change the mode to On Schedule through the drop down menu, and select the interval for the evaluation of the policy.

    Evaluate the Policies
    We have selected On Demand as our policy evaluation mode, so we will now evaluate the policy by executing Evaluate. Click on Evaluate and it will give the following result: the result demonstrates that one of the databases has a policy violation - the user guest is enabled in the AdventureWorks database. Before disabling it, you can double-check guest's CONNECT permission directly with the query shown below.
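    As a side note (this check is not part of the original tutorial, but uses the standard catalog views), you can confirm what the policy found with a quick T-SQL query run inside the suspect database:

        SELECT pr.name, dp.permission_name, dp.state_desc
        FROM sys.database_permissions dp
        JOIN sys.database_principals pr
            ON dp.grantee_principal_id = pr.principal_id
        WHERE pr.name = 'guest'
          AND dp.permission_name = 'CONNECT';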
    You can disable the guest user by running the following code in the AdventureWorks database:

        USE AdventureWorks;
        REVOKE CONNECT FROM guest;

    Once you run the above query, you can evaluate the policy again. Notice that the policy violation is fixed now. You can also change the evaluation mode of the policy to On Schedule and validate the policy on an interval. You can check the history of the policy and detect violations.

    Quiz
    I have created three conditions to check if the guest user has database access or not. Now I want to ask you: is it possible to do the same with 2 conditions? If yes, HOW? If no, WHY NOT?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Best Practices, CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Policy Management

    Read the article

  • How can I configure the Aiptek T-6000U graphics tablet?

    - by mejpark
    I followed the AiptekTablet instructions on the Ubuntu Wiki to configure 11.04 for use with my graphics tablet. I installed the xserver-xorg-input-aiptek package and created two files with the options detailed on the Wiki page above:

        $ cat /lib/udev/rules.d/69-xserver-xorg-input-aiptek.rules
        ACTION!="add|change", GOTO="xorg_aiptek_end"
        KERNEL!="event[0-9]*", GOTO="xorg_aiptek_end"
        ATTRS{idVendor}=="08ca", ENV{x11_driver}="aiptek", SYMLINK+="input/aiptektablet"
        LABEL="xorg_aiptek_end"

        $ cat /usr/share/X11/xorg.conf.d/50-aiptek.conf
        Section "InputClass"
            Identifier "pen"
            MatchProduct "Aiptek|AIPTEK|aiptek"
            MatchDevicePath "/dev/input/event*"
            Driver "aiptek"
            Option "USB" "on"
            Option "Type" "stylus"
            Option "Mode" "absolute"
            Option "zMin" "0"
            Option "zMax" "511"
        EndSection

    The 50-aiptek.conf file was originally called 10-aiptek.conf as in the Wiki, but an Aiptek tablet installation help thread on the Ubuntu Forums suggested changing 10 to 50. Any ideas? Thank you.

    Read the article

  • Unity3d generating a file in iOS and saving it on a linux machine

    - by N0xus
    I've done a little research and don't know if the following is possible. At the moment I have created a small application in Unity that generates an XML file. This file will be used to help set up my game. It's done in Unity because it's cross platform, with no need to re-write a single line of code. Eventually this will run on an iPad. However, my game will be running on a linux computer, and I need to pass the XML file over to the computer that will be running the final game (please don't ask why I'm doing that, it's something I need to do). So what I want to know is the following: can I generate my XML file on an iPad and have that XML file be saved and transmitted to a linux machine, without the need to manually copy the file over? If so, how is this possible?
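    One possible answer, sketched under assumptions (the URL is hypothetical, and you would need a small HTTP endpoint of your own listening on the Linux machine): Unity can send the generated XML over the network directly, for example with the WWW class and a form POST, which also works on iOS:

        // Minimal sketch: POST the generated XML to a server you run on the Linux box.
        using System.Collections;
        using UnityEngine;

        public class XmlSender : MonoBehaviour
        {
            public IEnumerator SendXml(string xml)
            {
                WWWForm form = new WWWForm();
                form.AddField("xml", xml);          // the XML document as a form field
                WWW www = new WWW("http://my-linux-box:8080/upload", form); // assumed endpoint
                yield return www;                   // wait for the upload to finish
                if (!string.IsNullOrEmpty(www.error))
                    Debug.LogError("Upload failed: " + www.error);
            }
        }

    You would kick it off with StartCoroutine(SendXml(xmlString)) after generating the file, and have the Linux-side service write the posted XML to disk.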

    Read the article

  • Why is Farseer not working in Windows Phone 7/8?

    - by Bryan
    I created a new Windows Phone Game (4.0) project in Visual Studio Express 2012 and added the Farseer project to the solution explorer. But adding a reference to the Farseer project is not working. I always get this error message: A reference to 'Farseer Physics XNA WP7' could not be added. References with different refresh levels are not supported. What is wrong? How can I use Farseer in Windows Phone 7/8? I uploaded the two projects to pastebin. Farseer project: pastebin.com/uyRusHxM Windows Phone project: pastebin.com/s74Gr66y

    Read the article

  • curlftpfs mount disagrees with the fstab

    - by KayakJim
    I am working with curlftpfs to mount a remote FTP directory locally in Kubuntu 12.04 64-bit. I have the following entry in my /etc/fstab:

        curlftpfs#ftp_user:ftp_password@ftp_server /mnt/nimh fuse ro,noexec,nosuid,nodev,noauto,user,allow_other,uid=1000,gid=1000 0 0

    I have created the directory in /mnt with the following:

        |-> ll /mnt
        total 4.0K
        drwxrwxr-x 2 jim fuse 4.0K Jan 6 09:56 nimh/

    My user does belong to the fuse group as well:

        uid=1000(jim) gid=1000(jim) groups=1000(jim),27(sudo),105(fuse)

    I am able to mount manually without issue, but then /mnt changes to:

        |-> mount /mnt/nimh
        |-> ll /mnt
        total 0
        drwxr-xr-x 1 jim jim 1.0K Dec 31 1969 nimh/

    However when I attempt to umount /mnt/nimh I receive:

        umount: /mnt/nimh mount disagrees with the fstab

    My /etc/mtab looks like:

        curlftpfs#ftp://ftp_user:ftp_password@ftp_server/ /mnt/nimh fuse ro,noexec,nosuid,nodev,allow_other,user=jim 0 0

    I am able to umount the filesystem without issue if I sudo. Any idea what I'm missing in order to be able to unmount without having to use sudo?

    Read the article

  • What’s the opposite of abstraction?

    - by Ollie Saunders
    As I understand it, abstraction is the term we use for when more meaning is created out of something simpler without altering it. It is derived from the Latin verb abstrahere (to ‘draw away’). For instance, text is just one abstraction of binary data, as are bitmaps. So, in computers, text and bitmaps exist on top of (are implemented in terms of) binary data. My question is: what is the opposite term? If I want to know the possible more basic things that bitmaps could be implemented in terms of, other than binary data (things like tiles for a mosaic or fabric patches for a patchwork quilt), what am I asking for? Is there a word for that? Abstraction has connotations of generalization, and the opposite process of that is specialization. IDK whether that helps.

    Read the article

  • quick approach to migrate classic asp project to asp.net

    - by Buzz
    Recently we got a requirement to convert a classic ASP project to ASP.NET. This is a really old project, created around 2002/2003, and it consists of around 50 ASP pages. I found very little documentation for this project: an FSD and design documents for only a few modules. Just taking a quick look into this project made my head start to hurt. It is really a mess. I checked the records and found that none of the developers who worked on this project work for the company anymore. My real pain is that this is an urgent requirement and I have to provide an estimated deadline to my supervisor. I found a similar question, classic-asp-to-asp-net, but I need some more insight on how to convert this classic ASP project to ASP.NET in the quickest possible way.

    Read the article

  • Find Nearest Object

    - by ultifinitus
    I have a fairly sizable game engine created, and I'm adding some needed features, such as this: how do I find the nearest object from a list of points? In this case, I could simply use the Pythagorean theorem to find the distance and check the results. I know I can't simply add x and y, because that's only the distance if you travel in right-angle turns. However, I'm wondering if there's something else I could do? I also have a collision system, where essentially I turn objects into smaller objects on a smaller grid, kind of like a minimap, and only if objects exist in the same gridspace do I check for collisions. I could do the same thing, only making the gridspace larger, to check for closeness (rather than checking every. single. object). However, that would take additional setup in my base class and clutter up the already cluttered object. TL;DR question: is there something efficient and accurate that I can use to detect which object is closest, based on a list of points and sizes?
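    One standard answer, sketched in C# (Entity and its X/Y fields are hypothetical stand-ins for whatever the engine's objects look like): compare squared distances instead of real distances. Squaring preserves ordering, so the nearest object comes out the same and you skip the square root entirely:

        // Returns the candidate closest to 'from', or null if the list is empty.
        Entity FindNearest(Entity from, List<Entity> candidates)
        {
            Entity nearest = null;
            float bestDistSq = float.MaxValue;
            foreach (var e in candidates)
            {
                float dx = e.X - from.X;
                float dy = e.Y - from.Y;
                float distSq = dx * dx + dy * dy;   // squared distance: no Math.Sqrt needed
                if (distSq < bestDistSq)
                {
                    bestDistSq = distSq;
                    nearest = e;
                }
            }
            return nearest;
        }

    This is still a linear scan; the coarse-grid idea from the question is how you would cut down the candidate list before running something like this.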

    Read the article

  • How to propagate http response code from back-end to client

    - by Manoj Neelapu
    Oracle Service Bus can be used for pass-through cases. Some use cases require propagating the HTTP response code back to the caller; http://forums.oracle.com/forums/thread.jspa?messageID=4326052&#4326052 is one such example, which we will try to accomplish in this tutorial. We will demonstrate this feature using Oracle Service Bus (11.1.1.3.0). We will also use commons-logging-1.1.1, httpcomponents-client-4.0.1 and httpcomponents-core-4.0.1 for writing the client to demonstrate.

    First we create a simple JSP which will always set the response code to 304. The JSP snippet will look like:

        <%@ page language="java"
            contentType="text/xml; charset=UTF-8"
            pageEncoding="UTF-8" %>
        <%
            System.out.println("Servlet setting Responsecode=304");
            response.setStatus(304);
            response.flushBuffer();
        %>

    We will now deploy this JSP on a weblogic server with URI=http://localhost:7021/reponsecode/. For this JSP we will create a simple Any XML BS. We will also create a proxy service. Once the proxy is created, we configure a pipeline for the proxy to use a route node, which invokes the BS (JSPCaller) created in the first place. Now we will create an error handler for the route node and add a stage. When the HTTP BS sends a request, the JSP sends the response back. If the response code is not 200, the HTTP BS will consider that an error, and the error handler configured above is invoked. We will print $outbound to show the response code sent by the JSP, followed by the actions that propagate it. To test this I created a simple client:

        import org.apache.http.Header;
        import org.apache.http.HttpEntity;
        import org.apache.http.HttpHost;
        import org.apache.http.HttpResponse;
        import org.apache.http.HttpVersion;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.conn.ClientConnectionManager;
        import org.apache.http.conn.scheme.PlainSocketFactory;
        import org.apache.http.conn.scheme.Scheme;
        import org.apache.http.conn.scheme.SchemeRegistry;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
        import org.apache.http.params.BasicHttpParams;
        import org.apache.http.params.HttpParams;
        import org.apache.http.params.HttpProtocolParams;
        import org.apache.http.util.EntityUtils;

        /**
         * @author MNEELAPU
         */
        public class TestProxy304 {
            public static void main(String arg[]) throws Exception {
                HttpHost target = new HttpHost("localhost", 7021, "http");
                // general setup
                SchemeRegistry supportedSchemes = new SchemeRegistry();
                // Register the "http" protocol scheme, it is required
                // by the default operator to look up socket factories.
                supportedSchemes.register(new Scheme("http",
                        PlainSocketFactory.getSocketFactory(), 7021));
                // prepare parameters
                HttpParams params = new BasicHttpParams();
                HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
                HttpProtocolParams.setContentCharset(params, "UTF-8");
                HttpProtocolParams.setUseExpectContinue(params, true);
                ClientConnectionManager connMgr = new ThreadSafeClientConnManager(params,
                        supportedSchemes);
                DefaultHttpClient httpclient = new DefaultHttpClient(connMgr, params);
                HttpGet req = new HttpGet("/HttpResponseCode/ProxyExposed");
                System.out.println("executing request to " + target);
                HttpResponse rsp = httpclient.execute(target, req);
                HttpEntity entity = rsp.getEntity();
                System.out.println("----------------------------------------");
                System.out.println(rsp.getStatusLine());
                Header[] headers = rsp.getAllHeaders();
                for (int i = 0; i < headers.length; i++) {
                    System.out.println(headers[i]);
                }
                System.out.println("----------------------------------------");
                if (entity != null) {
                    System.out.println(EntityUtils.toString(entity));
                }
                // When the HttpClient instance is no longer needed,
                // shut down the connection manager to ensure
                // immediate deallocation of all system resources
                httpclient.getConnectionManager().shutdown();
            }
        }

    On compiling and executing this, we see the below output in STDOUT, which clearly indicates the response code was propagated from the Business Service to the Proxy Service:

        executing request to http://localhost:7021
        ----------------------------------------
        HTTP/1.1 304 Not Modified
        Date: Tue, 08 Jun 2010 16:13:42 GMT
        Content-Type: text/xml; charset=UTF-8
        X-Powered-By: Servlet/2.5 JSP/2.1
        ----------------------------------------

    Read the article

  • Follow your friends on StackOverflow with FriendOverflow

    - by Mike Grace
    Screenshot

    About
    I created this app because I wanted to see what my friends and co-workers were doing on StackOverflow. I was previously going to their profiles to see what they were asking, answering, and commenting on, because most of the time I found what they were doing was interesting or relevant to what I was doing. This app is for anyone who visits StackOverflow using their desktop browser and has 'friends' they would like to follow on StackOverflow.

    Cost
    Free

    Download
    Google Chrome extension: http://goo.gl/ooE34
    Mozilla Firefox extension: http://goo.gl/3Pnqa
    Bookmarklet: http://goo.gl/FkuQW

    Platform
    Desktop browsers via Google Chrome extension, Mozilla Firefox extension, and bookmarklet

    Contact
    @MikeGrace

    Code
    App was built on the Kynetx platform using KRL (Kynetx Rule Language)

    Read the article

  • I made a 2D ENGINE for Android, looking for cooperation.

    - by Roger Travis
    My name is Robert, and I am an Android programmer. I wanted to show off my latest project - a 2d game engine. You can see it in action here: https://play.google.com/store/apps/details?id=engineDemo.com
    My engine's main advantage is its ease of use. To have your level up and running, you'll need only 3 lines of code:

        ABoxView aboxView = new ABoxView(this);
        setContentView(aboxView);
        aboxView.loadLevel("level/level02");

    Levels are created in a special level constructor, and object physical properties are stored in a corresponding XML file. I am looking to cooperate with those who might be interested in using my engine in their games. You can email me at [email protected] or post here.
    Thanks, Robert

    Read the article

  • Blogger.com kills FTP

    History (you can safely ignore): Back in 2002 I came across some (almost) free Linux/Apache space and set up my first manually-created HTML-based home page, which still exists: http://www.danielmoth.com/. In 2004 I wanted to have a blog that would be hosted on a sub-folder of my domain, and at the same time I did not want to mess with setting up a blog engine myself. I found the perfect solution in blogger.com, which offered a web interface for creating blog posts (and managing the pages' template)...

    Read the article
