Search Results

Search found 9935 results on 398 pages for 'pages'.


  • Portal And Content – Introduction (1 of 7)

    - by Stefan Krantz
    The posts coming over the next two months will form a new series. The idea is to help the reader understand how to enable a versatile and manageable portal. Each post will go through a specific use case or lifecycle group of events that a Content Driven Portal requires the development team to consider. The current plan is to cover the following subjects, each in a separate blog post:

    - Introduction – Introduction to the series of posts and what to expect at the end of the series
    - Components, part 1 – UCM, Site Studio and a high-level introduction to content templates
    - Components, part 2 – Page Templates and Navigation model
    - Components, part 3 – Applied Customization Framework for Content Presenter Taskflows
    - Scenario 1 – Enable a Portal for runtime administration
    - Scenario 2 – Enable a Portal for Internationalization
    - Scenario 3 – Enable a Portal for Content Workflows

    Background

    This series has been written to help customers, partners and consultants understand the concept of a WebCenter Portal project where the main focus, or a majority of the portal, is content interaction. Today, most of the portal installations Oracle WebCenter Portal is involved in have a vast majority of content-based pages. Many portal projects have run or will run into challenges; to mitigate them, the portal and content lifecycle has to be well designed. The coming posts will address the main components that should be involved when creating such scenarios, and will also go into detail on the process by describing three solution scenarios. The aim of the scenarios is to give the reader a more hands-on understanding of the concept of building and architecting a Content Driven Portal. The scenarios were chosen based on the most common use cases identified to date.

    Read the article

  • Seeking solution for printing-reporting .NET

    - by Parhs
    I am developing an application that prints, in separate threads, in extreme cases about 20-25 pages per minute to various thermal printers. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so the GDI/EMF conversion is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, making it impossible to change the codepage/emulation. Also, most computers running my software have low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that receipts here are signed by an Electronic Fiscal Memory device, and most of these don't do a good job extracting text from XPS, as they follow not the standard but the way Windows converts GDI to XPS. Even in text-only mode, some of them don't support all character encodings, and it is impossible to send a paper-cut command after the sign. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data, insert an image and replace text. I know there is StringTemplate, which could handle the template generation, but the problem is that I would somehow have to parse the template and render it using GDI commands. Is there any other solution/approach for this? Or is there anything ready-made?
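
    To make the GDI route concrete, here is a minimal sketch (not the author's code) of rendering tabular data plus an image through System.Drawing.Printing, which GDI-capable drivers spool as EMF. The printer name, font, column width and data shape are placeholder assumptions:

        using System;
        using System.Drawing;
        using System.Drawing.Printing;

        class GdiReceiptPrinter
        {
            public static void PrintReceipt(string printerName, string[][] rows, Image logo)
            {
                var doc = new PrintDocument();
                doc.PrinterSettings.PrinterName = printerName; // e.g. the thermal printer queue
                doc.PrintPage += (sender, e) =>
                {
                    float y = e.MarginBounds.Top;
                    using (var font = new Font("Courier New", 9))
                    {
                        // Optional image, drawn via GDI+ like everything else
                        if (logo != null)
                        {
                            e.Graphics.DrawImage(logo, e.MarginBounds.Left, y);
                            y += logo.Height + 5;
                        }
                        // Simple fixed-width "table": one DrawString per cell
                        foreach (var row in rows)
                        {
                            float x = e.MarginBounds.Left;
                            foreach (var cell in row)
                            {
                                e.Graphics.DrawString(cell, font, Brushes.Black, x, y);
                                x += 120; // placeholder column width
                            }
                            y += font.GetHeight(e.Graphics);
                        }
                    }
                    e.HasMorePages = false; // single page for brevity
                };
                doc.Print();
            }
        }

    A text-only variant would bypass Graphics entirely and send raw bytes to the spooler instead.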

    Read the article

  • Legal issues regarding embedding a toolbar into a browser [closed]

    - by OmarOthman
    We are in the process of developing software that provides a service to internet users, and we would like to ask about the legal liabilities of some issues. Of course, everything is to be done with the consent of the user of our software, but our concern is about third-party tools and services that may be invoked/used by our product. In particular, these are the concerns: (1) Embedding a toolbar into an existing browser. This screenshot is an example, where the words in the highlighted toolbar are passed to www.google.com for searching, and the contents of the window are the results of the search. I want to know whether any consent should be obtained before such a toolbar can be embedded in a web browser, whether there are any legal requirements imposed by the web browser, and whether different web browsers have different requirements (at least for Internet Explorer, Firefox, Chrome, Opera and Safari). (2) Invoking a free website from that toolbar (like Google's search page). The screenshot above demonstrates such an existing toolbar. (3) Full ownership of and unrestricted access to the data entered into this toolbar. In the screenshot above, I want to take the words (translation english to spanish) and own them, i.e. store them in my database and do some processing on them. (4) The ability to track the pages visited by the user starting from that free website. In the screenshot above, you can notice that the user opted only for the third result, whose URL is translate.google.com. I want to have access to this and all URLs clicked from this page, for some processing as well. This is a commercial application, so I need a very concrete, precise and reference-supported answer.

    Read the article

  • PHP/MySQL Database application development tool

    - by RCH
    I am an amateur PHP coder and have built a couple of dozen projects from scratch, including fairly simple e-commerce systems with user authentication and PayPal integration, all coded by hand from a clean page, plus a price comparison engine that takes data from multiple sites. But I am no expert with OO and other such advanced techniques; I just have a fairly decent grasp of the basics of data processing, logic, functions and trying to optimize code as much as possible. I just want to make this clear so you have some idea of where I'm coming from. I have a couple of fairly large new projects on my plate for corporate clients. Both require bespoke database-driven applications with complex relationships, many tables and lots of different front-end functions to manipulate that data for the internal staff of these companies. I figured building these systems from scratch would probably be a huge waste of time. Instead, there must be tools out there that will allow me to construct MySQL databases and build the pages, with things like pagination, action buttons, table construction etc. Some kind of database abstraction layer, or system generator, if you will. What tool do you recommend for such a purpose for someone at my level? Open source would be great, but I don't mind paying for something decent as well. Thanks for any advice.

    Read the article

  • How to SEO Optimize Javascript Image Loader?

    - by skibulk
    I am building an image-centric catalog website. It catalogs collectible gaming cards numbering 100,000+ pages. Competitor sites receive millions of hits each month, so with the possibility of excessive traffic, I need to moderate image bandwidth while also optimizing for image SEO, and I'm looking for some tips on doing so. Each page on the site features one card with appropriate tags and descriptions. There are, however, four images for each card: one on matte cardstock, one on foil cardstock, one digital, and one digital foil. In a world with unlimited bandwidth and no-wait page loads, I'd simply embed all four images on the main product page with titles, alt tags, and captions to rank them according to their version keyword. In reality, a javascript gallery image loader seems appropriate. Here is a simplified example of my current code. Would this affect SEO in any way? Should I be doing anything differently? Note that I don't want to create a page for each image, as I'd have to duplicate the card tags and descriptions on each one, diluting PR for the main page. Thanks for any insight!

        <script type="text/javascript">
        document.write(
          '<img src="thumbnail1.jpg" data-src="version1.jpg">' +
          '<img src="thumbnail2.jpg" data-src="version2.jpg">' +
          '<img src="thumbnail3.jpg" data-src="version3.jpg">' +
          '<img src="thumbnail4.jpg" data-src="version4.jpg">'
        );
        </script>
        <noscript>
        <img src="version1.jpg">
        <img src="version2.jpg">
        <img src="version3.jpg">
        <img src="version4.jpg">
        </noscript>

    Read the article

  • Sharing business logic between server-side and client-side of web application?

    - by thoughtpunch
    Quick question concerning shared code/logic in the back and front ends of a web application. I have a web application (Rails + heavy JS) that parses metadata from HTML pages fetched via a user-supplied URL (think Pinterest or Instapaper). Currently this processing takes place exclusively on the client side; the code that fetches the URL and parses the DOM is in a fairly large set of JS scripts in our Rails app. Occasionally I want to do this processing on the server side of the app, for example if a user supplied a URL but has JS disabled, or has a non-standards-compliant browser, etc. Ideally I'd like to be able to process these URLs in Ruby on the back end (in asynchronous background jobs, perhaps) using the same logic that our JS parsers use, WITHOUT porting the JS to Ruby. I've looked at systems that allow you to execute JS scripts in the backend, like execjs, as well as Ruby-to-JavaScript compilers like OpalRB that would hopefully allow "write once, execute many", but I'm not sure that either is the right decision. What's the best way to avoid business logic duplication for apps that need to do both client-side and server-side processing of similar data?

    Read the article

  • Overwhelmed by complex C#/ASP.NET project in Visual Studio 2008

    - by Darren Cook
    I have been hired as a junior programmer to work on projects that extend existing functionality in a very large, complex solution. The code base consists of C#, ASP.NET, jQuery, JavaScript, HTML and XML. I have some knowledge of all of these, in addition to a fair knowledge of object-oriented programming and its fundamental concepts of inheritance, abstraction, polymorphism and encapsulation. I can follow code up through its base classes, interfaces and abstract classes, and I understand a large part of the code that I read while doing this. However, this solution is so humongous, and so many things get tied together whenever I navigate through the code, that I feel absolutely overwhelmed. I often find myself unable to fully follow everything that is going on with objects being serialized, large amounts of C# and JavaScript operating on the same pages, and methods being called from template files that consist mainly of markup. I love learning about code, but trying to deal with this really stresses me out. Additionally, I do know that a significant amount of unit testing has been done, but I know nothing about unit testing or how to utilize it. Any advice anyone could offer me regarding dealing with a large code base while using Visual Studio 2008 would be greatly appreciated. Are there tools that I can use to help get a handle on what is going on? Perhaps there are things even in Visual Studio that I am not aware of. How can I follow the code down to low-level functionality in order to get a better grasp of what is going on at a high level?

    Read the article

  • HTTP Protocol

    I have worked with the HTTP protocol for about ten years now, and I have found it to be incredibly useful for transferring data, especially to remote systems, regardless of the network environment. Prior to the existence of web services, developers used HTTP to screen-scrape data off of web pages in order to interact with remote systems, and then processed the data as they needed. I used the HttpWebRequest and HttpWebResponse classes to screen-scrape data from various sites that had information I needed when no web service was available. This allowed me to call just about any webpage and grab all of the content on the page. Below is a piece of a web spider that I built about 5-7 years ago. The spider uses the HTTP protocol to request webpages and then parses the data that is returned. At the time of writing the spider, I wanted to create a searchable index of sites I frequented.

        // C# 2.0 Framework
        // Creating a request for a specific webpage
        HttpWebRequest webreq = (HttpWebRequest)WebRequest.Create(_Url);

        // Storing the response of the web request
        webresp = (HttpWebResponse)webreq.GetResponse();
        StreamReader loResponseStream = new StreamReader(webresp.GetResponseStream());
        _Content = loResponseStream.ReadToEnd();

        // Adjust the encoding of the response
        string charset = "";
        EncodeString(ref _Content, ref charset);
        loResponseStream.Close();

        // Parse data from the response
        _Content = _Content.Replace("\n", "");
        _Head = GetTagByName("Head", _Content);
        _Title = GetTagByName("title", _Content);
        _Body = GetTagByName("body", _Content);
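
    The EncodeString and GetTagByName helpers are not included in the excerpt above. As a rough sketch of what a regex-based GetTagByName might look like (an assumption on my part, not the original implementation):

        using System.Text.RegularExpressions;

        // Hypothetical reconstruction: returns the inner content of the first
        // <tag ...>...</tag> pair in 'content', or an empty string if absent.
        static string GetTagByName(string tagName, string content)
        {
            Match m = Regex.Match(content,
                "<" + Regex.Escape(tagName) + @"[^>]*>(.*?)</" + Regex.Escape(tagName) + ">",
                RegexOptions.IgnoreCase | RegexOptions.Singleline);
            return m.Success ? m.Groups[1].Value : string.Empty;
        }

    For anything beyond simple head/title/body extraction, a real HTML parser is more robust than regular expressions.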

    Read the article

  • Making Modular, Reusable and Loosely Coupled MVC Components

    - by Dusan
    I am building an MVC3 application and need some general guidelines on how to manage complex client-side interaction between my components. Here is my general definition of a component: a component has its own controller, model and view. All of the component's logic is placed inside these three parts, and the component is sort of "standalone": it contains its own form, the data needed for interaction, updates itself with Ajax, and so on. Besides this internal logic and behavior, the component needs to be able to "talk" to the outside world. By this I mean it should provide data and events (sort of), so that when the component gets embedded in pages it can notify other components, which can then update based on the current state and data. I have an idea to use a client ViewModel (in JavaScript) which would hook up all relevant components on a page and control the interaction between them. This would make components reusable and modular, independent of the context in which they are used. How would you do this? I am a bit stuck, as I do not know whether this is a good approach and whether it is technically possible to achieve using JavaScript/jQuery. The confusing part is the update via Ajax: how to ensure that a component is properly linked to the ViewModel when the component is Ajax-updated (or, even worse, removed or dynamically added). Also, how should this ViewModel be constructed, and which techniques should be used here and in the components so they work in synergy? On the web I have found various examples of a similar approach, but they are either oversimplified (even for dummies) or overly specific, and do not provide a valuable resource or general solution for this kind of implementation. If you have some serious examples, that would also be very helpful. Note: my aim is to make interactions between many components on the same page simpler, more robust and more elegant.

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

    https://example.com (default)
    https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

    https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist, since I removed the language translations, but the page loads when it should not! I played around and typed example.com?whatever123. It seems that parameters always load as long as everything before the question mark is a real URL. So, if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's a parameter that needs to be de-indexed.

    Read the article

  • DualLayout OpenSourceFood demo site installation instructions

    - by svdoever
    We released DualLayout, which enables advanced web design with the power of SharePoint. DualLayout and a demo site can be downloaded from the DualLayout product page. This blog post contains detailed instructions on installing the demo site. The demo site is based on the site http://opensourcefood.com. The demo site requires internet access because it still links to pages and resources of the real site. Execute the following steps to install the demo site:

    1. Copy the OpenSourceFoodDemo.zip file to your SharePoint Server 2010.
    2. Make sure that the zip file is "unblocked", otherwise its files are assumed to be from another computer (right-click on the zip file and press the "Unblock" button if available).
    3. Unzip the OpenSourceFoodDemo.zip to a folder of your choice (c:\OpenSourceFoodDemo).
    4. Open the SharePoint 2010 Management Shell: Start->Microsoft SharePoint 2010 Products->SharePoint 2010 Management Shell.
    5. Change directory to the unzip folder (cd c:\OpenSourceFoodDemo).
    6. Start the install script: .\InstallDemoSite.ps1
    7. Answer the questions; the default values are okay in most cases. A little guidance:
       - Question: Give credentials for the account that will be used for the application pool. Answer: use, for example, the same account as used for the application pool of your SharePoint site (look it up in IIS Manager).
       - Question: Give credentials for the account that will be used for the application pool. Answer: use the same account you are currently logged in with.

    The demo site is made available through a backup and restore, so the SharePoint Server 2010 installation must be patched to a level equal to or higher than the update level of the SharePoint Server used to create the backup. If you get errors with respect to the restore, check http://technet.microsoft.com/en-us/sharepoint/ff800847.aspx for downloading the latest cumulative update.

    Read the article

  • Unable to print login-required images in IE

    - by Tim Fountain
    I have some images in a section of a site that require the user to be logged in in order to view them. These images are served by a PHP script, which checks the user's login state and, if valid, serves the binary data with the appropriate headers. This all works fine. The issue comes when a user tries to print one of these images. In Internet Explorer, when they go to print preview they get the broken-image box with a red cross in the corner instead of the actual file. This is what gets printed as well. All other browsers can print the images without issue. I have some images elsewhere on the site that are also served via PHP but don't require a login; these print fine. The PHP-powered HTML pages on the site that require a login also print fine in IE. It's just login-required images. The user hitting print preview does not seem to result in an additional HTTP request to the server for the file. However, I do see an additional HTTP request a few seconds later that comes from the same IP (it may or may not be related); this request includes no host header, no REQUEST_URI and no user agent. The 'please login' page sends an appropriate 403 header. I've also added a far-in-future Expires header to the image response itself, to ensure that browsers can serve/print the files from their own cache, but this hasn't made any difference. Why can't IE print the images, and what else can I do to investigate or fix the problem?

    Read the article

  • How to overcome politics of the net (Google translate code refuses to work from a specific region) [closed]

    - by Jawad
    Possible Duplicate: How to overcome politics of the net (Google translate code refuses to work from a specific region) I have this web site. It uses the Google Translate API (I can't post the link; it does not open from this region) with the following code:

        <meta name="google-translate-customization" content="9f841e7780177523-3214ceb76f765f38-gc38c6fe6f9d06436-c"></meta>
        <script type="text/javascript">
        function googleTranslateElementInit() {
          new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element');
        }
        </script>
        <script type="text/javascript" src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script>

    The problem is that since then, it just stopped working. On the site you can see that I had to actually remove the above from here, here, and here, while leaving it here, here, here and here. This is because the web site "refuses" to load at all on the pages that have the code (i.e., from this region). If I use the Firefox Stealthy plugin and open the site in Firefox, it works like a charm without any problems. But with Google Chrome, Apple Safari and the Opera web browser, the site does not load/open at all because of the Google Translate code (I know this because if I remove the Google Translate code, the site works/loads fine). It was one thing to program for "cross-browser compatibility" and altogether another to program for "cross-region compatibility". What can I do to make sure that the site works from anywhere? Do I completely remove the Google Translate code and just do without the additional functionality, or do I look for alternatives like this, or according to this?

    Read the article

  • How does Wikipedia's SEO work?

    - by Josh Siegl
    I'm sorry if this question is misplaced or doesn't belong here. I'm currently developing an app for Android and iOS, and of course I'm thinking about the best ways to market it. Last night I Googled somebody else's app, and the third link in was a Wikipedia page on it. I had never even thought of apps having Wikipedia pages, but alas, there it was. And of course it was very helpful in determining exactly what the app did and in what cases it was useful (something that's absolutely crucial for potential customers to understand). So then I got to thinking that I should create a Wiki for my app, but how does Wikipedia apply SEO? I know the question could be overly complicated or specific; I'm just looking for general answers. For instance, when somebody Googles my app, where does Wikipedia display in the results? When I create a Wiki for my app, how do I ensure that the Wikipedia page shows in the search results (is there any way to do that)? I'm sure I'll find all of this out later when I create a Wiki for my app; I guess I'm just asking this out of curiosity. So how does Wikipedia's search engine optimization work? (on a page-by-page basis)

    Read the article

  • Designing spawning system

    - by Vlad
    I played this game recently, http://www.kongregate.com/games/JuicyBeast/knightmare-tower, and I am amazed by the way different monsters are being spawned. I developed my own shooter game, and I added a time-based but also a count-based spawning system. By count-based I mean: when there are 5 enemies on stage, stop spawning. But this is one example. My question is, how are these spawning mechanisms built? Is there some pattern or theory for how they are constructed? Are there some online materials/pages where I can improve my knowledge? To summarize, let's say we have 6 types of monsters. I start the game and kill monsters of types 1, 2 and 3 all the time. Once I pass the first ceiling, like in the game above, monster type 4 appears. And so on. As I progress through the game, the same system of 6 monster types stays, but they become more and more resilient and dangerous, so I must also improve to be able to destroy the same monsters, now stronger. My question is simple: are there theories built or written for developing this type of intelligent system? Note: this is a general question, not tied to some specific game or how exactly the game should work. I am capable of programming my own mechanisms, but I think I need some help. Thanks.
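
    One common pattern for this is a data-driven spawn table combined with gating rules. A minimal C# sketch, with all type names, thresholds and intervals invented for illustration:

        using System.Collections.Generic;
        using System.Linq;

        // Each monster type unlocks at a progression threshold; spawning is
        // gated by both a per-type timer and a cap on live enemies.
        class SpawnRule
        {
            public string MonsterType;
            public float UnlockHeight;   // progression point where this type appears
            public float Interval;       // seconds between spawns of this type
            public float NextSpawnTime;
        }

        class SpawnManager
        {
            const int MaxAliveEnemies = 5;   // count-based gate
            readonly List<SpawnRule> rules = new List<SpawnRule>
            {
                new SpawnRule { MonsterType = "Type1", UnlockHeight = 0f,    Interval = 2f },
                new SpawnRule { MonsterType = "Type4", UnlockHeight = 1000f, Interval = 4f },
            };

            public void Update(float time, float playerHeight, List<string> aliveEnemies)
            {
                if (aliveEnemies.Count >= MaxAliveEnemies) return;

                foreach (var rule in rules.Where(r => playerHeight >= r.UnlockHeight))
                {
                    if (time >= rule.NextSpawnTime)          // time-based gate
                    {
                        aliveEnemies.Add(rule.MonsterType);  // stand-in for real spawn code
                        rule.NextSpawnTime = time + rule.Interval;
                        // Difficulty scaling could multiply stats by f(playerHeight) here.
                    }
                }
            }
        }

    Because the behavior lives in the table rather than the code, adding a new monster type or retuning difficulty becomes a data change rather than a logic change.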

    Read the article

  • How can I tell GoogleBot that a subdirectory is now a subdomain?

    - by cwd
    I had about a million pages of a catalog indexed under a subdirectory, and now that has moved to a subdomain. GoogleBot is crawling each one of them and getting a 301 redirect to the new location. Even though I have set up the redirect rule in the Apache sites-enabled configuration file (i.e., it happens early, when Apache does the redirect; PHP is not even getting loaded), the server isn't handling the load well. GoogleBot is making around 5 requests per second, and on top of my normal traffic that is hiking up the CPU for a few hours at a time. I checked in Webmaster Tools and the corresponding documentation for a way to let Google know that the content had been moved from a subdirectory to a subdomain, but with little luck. Basically, the most helpful thing I saw said to just send 301 headers for the new location. How can I tell GoogleBot that a subdirectory is now a subdomain? If that is not an option, how can I more efficiently send 301 redirects for a particular subdomain? I was thinking perhaps of the Nginx server, but I'm not sure that I can run both Apache and Nginx side by side on port 80 for different subdomains.
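
    For reference, the kind of early redirect rule described above would look something like this in a sites-enabled vhost (a sketch with hypothetical host and path names, not the actual configuration):

        # mod_alias answers the 301 before PHP is ever invoked
        <VirtualHost *:80>
            ServerName www.example.com
            RedirectMatch 301 ^/catalog/(.*)$ http://catalog.example.com/$1
        </VirtualHost>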

    Read the article

  • Creating a Website Without a Framework [closed]

    - by James Jeffery
    I've been using PHP frameworks for so long that I've actually forgotten the "best practices" for creating websites without one. Usually I will use Symfony, or more recently Laravel. A client wants a very simple website, but with certain parts of it dynamic. Due to the nature of the site, using WordPress or a framework is out of the question. I'm a sucker for priding myself on my code, but I feel like I'm asking such a basic question that it's killing me to ask. But what are the best practices for creating websites without a framework? I like to live by the K.I.S.S. (Keep It Simple, Stupid!) method of thinking. So my idea was to just create the .php pages that are required, do any page processing or database interaction on that page, then have the HTML below the closing PHP tag. I would keep any helpers/functions in a functions.php file. This is what I remember doing way before I was using frameworks, and to me it seems like a very old-school way of doing things. I've not created a site without a framework for literally 2+ years, so I've lost my way with the basics. Any advice would be greatly appreciated.

    Read the article

  • Learning Programming from scratch

    - by David542
    I am entirely new to programming, other than basic HTML/CSS knowledge. I want to learn programming as quickly and efficiently as possible, and I'm willing to put in the time (at least 70 hours a week). The reason I want to learn is that I have a startup that I've written a business plan for and have prototyped in Photoshop (both front-end and back-end pages). My goal is to have a prototype of the site up within 6 months. I have a good aptitude for math (A's in all math courses up through DiffEq and Linear Algebra). I assume learning programming from scratch can be a daunting task, not because it is particularly difficult, but because there are so many areas and so much information. I want to make sure that I learn as efficiently as possible and have individuals (in addition to Google) to solicit advice from who will help me when I get stuck or have questions. I know that with others' help, my learning experience will be both more productive and enjoyable. What is the best way to find people who will help me with this? What are some good 'live' resources in addition to asking questions on Stack Overflow? Thank you very much for your time and help.

    Read the article

  • Back button after doing posts on the same page

    - by user441521
    I have 3 pages on my site. The 1st page allows you to select a bunch of options. Those options get sent to the 2nd page, to be displayed with some data about those options. From there I can click on a link to get to page 3 for one of the options. On page 3 I can create/edit/delete, all on the same page, where reloads come back to page 3. I want a "back" button on page 3 that goes back to page 2 but retains the options it had from the original page 1 request. Page 1 has a bunch of checkboxes, which are passed to page 2 as arrays to the controller. My thought is that I have to pass these arrays (I converted them to lists) on to page 3 (even though page 3 doesn't directly need them), so that page 3 can use them in the back link to recreate the values page 1 originally sent to page 2. I'm using ASP.NET MVC, and when I pass the converted array it seems to render the type name instead of the actual values: "types=System.Collections.Generic.List" (where types is a List). Is this what is needed, or are there other options for getting a "back" button that returns to page 2 in this case? It's sort of a pain to pass information to page 3 that isn't relevant to page 3 except for the back button.
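
    A sketch of how this is often handled in ASP.NET MVC (controller, action and parameter names are hypothetical): the default model binder fills a List<int> from a repeated query-string parameter, so the page-3 back link can emit one name=value pair per item instead of letting the list render via ToString():

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        public class ReportController : Controller
        {
            // Hypothetical page-2 action. A request such as
            //   /Report/Summary?types=1&types=2&types=5
            // binds the repeated "types" parameter into the list automatically.
            public ActionResult Summary(List<int> types)
            {
                return View(types);
            }

            // Hypothetical helper for page 3: build the back link by repeating
            // the parameter, instead of passing the List<int> object itself.
            public static string BuildBackUrl(IEnumerable<int> types)
            {
                return "/Report/Summary?" +
                       string.Join("&", types.Select(t => "types=" + t));
            }
        }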

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project I am creating for my work, I am creating a fairly large ASP.NET Web API. The API will be in a separate Visual Studio solution that also contains all of my business logic and database interactions, as well as the Model classes. In the test application I am creating (which is ASP.NET MVC4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a Model class. The reason behind this is that I want to take advantage of strongly typing my views to a Model. This is all still at a proof-of-concept stage, so I have not done any performance testing on it, but I am curious whether what I am doing is a good practice, or whether I am crazy for even going down this route. Here is the code in the client controller:

        public class HomeController : Controller
        {
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public ActionResult Index() // This view is strongly typed against User
            {
                // testing against Joe Bob
                string adSAMName = "jBob";
                WebClient client = new WebClient();
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;
                // 'User' is a Model class that I have defined.
                User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));
                return View(result);
            }
            .
            .
            .
        }

    If I choose to go this route, another thing to note is that I am loading several partial views on this page (as I will also do on subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above: instantiate a new WebClient, define the URL to hit, then deserialize the result and cast it to a Model class. So it is possible (and likely) that I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will: (1) let me keep strongly typed views, and (2) do my work on the server rather than on the client (this is just a preference, since I can write C# faster than I can write JavaScript)?
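
    For comparison, a sketch of the same call routed through one shared HttpClient (assuming .NET 4.5 and Json.NET are available; the names mirror the snippet above), so the 4-5 calls per page reuse a single client instead of constructing a new WebClient each time:

        using System.Net.Http;
        using Newtonsoft.Json;

        public static class DashboardClient
        {
            static readonly HttpClient client = new HttpClient();

            public static T Get<T>(string url)
            {
                string json = client.GetStringAsync(url).Result; // blocking for brevity
                return JsonConvert.DeserializeObject<T>(json);
            }
        }

    Sharing one HttpClient avoids socket churn across the repeated partial-view calls.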

    Read the article

  • SharePoint 2010 Video Training

    - by Sahil Malik
    Yes, the DVD is finally available. This is an exhaustive 14-hour video course that Carl and I recorded back in April. It is an end-to-end overview of SharePoint 2010. You can view more details, including ordering information, about the DVD here. And if you're interested, a SharePoint 2007 video training version is also available. Carl and I worked quite hard on putting these together, so we hope you enjoy them. Detailed table of contents:

    - Introduction (13:49)
    - 30,000 Foot Overview (42:07)
    - Application Management (43:35)
    - User Experience (16:00)
    - Writing Code Part 1 (1:07:49)
    - Writing Code Part 2 (34:41)
    - Simple Web Parts (14:01)
    - Visual Web Parts (6:35)
    - Pages (35:02)
    - Putting it All Together (29:13)
    - Client Side Technology (49:19)
    - ADO.NET Data Services (51:29)
    - Custom Data Services (43:30)
    - Managing Data (29:02)
    - Managing Data: Content Types (17:11)
    - Managing Data: Events (19:22)
    - Managing Data: List Scalability (35:51)
    - Managing Data: Querying (20:07)
    - Enterprise Content Management: DocumentIDs and Document Sets (16:44)
    - Enterprise Content Management: Metadata Infrastructure (22:13)
    - Enterprise Content Management: Record Management (26:27)
    - Enterprise Content Management: Content Organizer (7:21)
    - Enterprise Content Management: Enterprise Content Types (11:21)
    - Business Connectivity Services (BCS) in the SharePoint Designer (26:09)
    - BCS in Visual Studio (9:57)
    - Workflows in the SharePoint Designer (22:07)
    - Workflows in Visual Studio (19:01)
    - Business Intelligence (21:14)
    - Excel (15:25)
    - Performance Point (24:37)
    - Security: Claims-Based Authentication (27:13)
    - Security: Secure Store Service (11:04)
    - Security: The SharePoint Object Model (11:16)

    Read the article

  • Are web application usability issues equal to website usability issues?

    - by Kor
    I've been reading two books about web usability issues and tests (Rocket Surgery Made Easy¹ and Prioritizing Web Usability²), and they describe strategies for, and typical problems of, website usability and how to address them. However, I want to build a web application, and I think I have lost track of what I am trying to solve. These two books claim to deal with plain websites (e-commerce, business sites, even intranets), but I'm not sure whether everything about website usability is applicable to web application usability. They certainly talk about always having the Back button available (and usable), about focusing on short information rather than big blocks of text, etc., but they could be inaccurate on deeper problems that may be easier (or simply skippable) in regular websites. Does anybody with experience in this field know whether web applications and websites share their usability issues? Thanks in advance.

    Edit: Quoting Wikipedia, a website is a collection of related web pages containing images, videos or other digital assets, and a web application is an application that is accessed over a network such as the Internet or an intranet. To sum up, both show information and let you search for or produce it, but websites are "simple" in interaction and keep to the classics of websites (one-click actions), while web applications are closer to desktop applications in their uses and ways of interaction (double-click, modal windows, asynchronous calls [to keep you in the same "environment" instead of reloading it], etc.). I don't know if this clarifies the difference.

    Edit 2: Quoting @Victor and myself, a website is anything running in your browser, but a web application is something running in your browser that could also run on your desktop, with similar behaviors and features. Gmail is a web application that could replace Outlook. GDocs could replace Office. Grooveshark could replace your music player, etc.

    Read the article

  • Page appears indexed in Google but not findable for any search terms?

    - by Jeff Atwood
    (Note that I am going to use screenshots here because I suspect writing about this will change the behavior over time.) If you do a Google search for uiviewcontroller best practices either with or without the quotes, you end up with results like this: Note that none of these pages resolve to the actual Stack Overflow question containing those words in the title. They resolve to either a) sites that are mirroring our creative commons data and correctly pointing back to the source question without nofollow, as properly specified by our attribution requirements or b) our own internal links to the question, but not the actual question itself. The actual page with the title ... Custom UIView and UIViewController best practices? ... does exist at this URL ... http://stackoverflow.com/questions/3300183/custom-uiview-and-uiviewcontroller-best-practices ... and apparently it is present in Google's index! But why does it not appear when we search for uiviewcontroller best practices ? We know that Google contains this page in its index Our search terms match the title of the question Stack Overflow has much higher pagerank than the other sites that are mirroring this question under Creative Commons I don't get it. What are we doing wrong here?

    Read the article

  • ADF Essentials - Available for free and certified on GlassFish!

    - by delabassee
    If you are an Oracle customer, you are probably familiar with Oracle ADF (Application Development Framework). If you are not, ADF is, in a nutshell, a Java EE based framework that simplifies the development of enterprise applications. It is the development framework that was used, among other things, to build Oracle Fusion Applications. Oracle has just released ADF Essentials, a free-to-develop-and-deploy version of Oracle ADF's core technologies. And as good news never comes alone, GlassFish 3.1.2 is now a certified container for ADF Essentials! ADF Essentials leverages core ADF features and includes:

    - Oracle ADF Faces - a set of more than 150 JSF 2.0 rich components that simplify the creation of rich web user interfaces (charting, data visualization, advanced tables, drag and drop, touch gesture support, extensive windowing capabilities, etc.)
    - Oracle ADF Controller - an extension of the JSF controller that helps build reusable process flows and provides the ability to create dynamic regions within web pages.
    - Oracle ADF Binding - an XML-based, metadata abstraction layer to connect user interfaces to business services.
    - Oracle ADF Business Components - a declaratively-configured layer that simplifies developing business services against relational databases by providing reusable components that implement common design patterns.

    ADF is a highly declarative framework and has always had very good tooling support. Visual development for Oracle ADF Essentials is provided in Oracle JDeveloper 11.1.2.3. Eclipse support is planned for a later OEPE (Oracle Enterprise Pack for Eclipse) release. Here are some relevant links to quickly learn how to use ADF Essentials on GlassFish:

    - Video: Oracle ADF Essentials Overview and Demo
    - Deploying Oracle ADF Essentials Applications to Glassfish
    - OTN: Oracle ADF Essentials Resources

    Read the article

  • Which Bliki (Blog+Wiki) solution can you recommend?

    - by asmaier
    I'm searching for a good Bliki solution, meaning a combination of blog and wiki that I can install on my own web space. I would like to be able to write articles in wiki style, much like with MediaWiki. So I want to use a wiki markup language; have a revision history, comments and internal links to other pages (maybe in other languages); and be able to collaboratively edit the articles. On the other side, I would like to have a blog-like view of my articles, showing new articles (and changes to existing articles) in a time-ordered fashion. It would be nice if it were possible to search through the articles and also tag them, so one could generate a tag cloud for the articles. A nice feature would also be the ability to order the articles according to views, or even a voting system for the articles. Good would also be a permission system to keep certain articles private, showing them only to people logged in to the platform. Apart from these nice-to-have features, an absolute must-have for the Bliki platform I'm searching for is the ability to handle math equations (written in LaTeX syntax) and display them either as pictures, like MediaWiki, or even better using MathJax. At the moment I'm using a web service called Wikidot which offers some of the mentioned features; however, the free version shows too many advertisements, the blog feature is not mature, the design is quite ugly, and loading of the page is often slow. So I want to install a Bliki solution on my own web space. Can you recommend any solution for that?

    Read the article
