Search Results

Search found 18450 results on 738 pages for 'website attacks'.

Page 250/738

  • Does Google sometimes prevent new white hat sites from ranking at all in some verticals?

    - by JVerstry
    Assume someone wants to launch a new Viagra or acai berry e-commerce website. There is a lot of competition, and the site does not really bring anything new other than yet another place to buy the products at a nice price. Assume the site does not use any black hat techniques and stays within Google's quality guidelines, and that it has no (or few) backlinks, and those only from non-authoritative websites. Assume the site's pages are indexed properly in Webmaster Tools, no penalties are reported, no site improvements are suggested, Google crawls the site daily as reported in GWT, and there are no robots.txt configuration issues. Does Google sometimes decide not to rank such a site for any user query (for weeks) because of its lack of original content? I am asking because I am trying to understand the possible cause of a similar situation I am observing with two sites. If so, what is the way to start ranking for these sites? If not, does it mean the cause is certainly elsewhere? Any confirmed information to get out of the maze is welcome.

    Read the article

  • Devoxx 2011 Started Today

    - by Yolande
    Devoxx 2011, organized by the Belgian Java User Group, is the biggest Java conference in Europe. The first two University Days set the tone for the week-long conference with in-depth technical sessions led by luminaries from the Java community and industry experts. Each day is a great mix of 3-hour sessions and hands-on labs, 30-minute Tools-in-Action sessions giving tips for faster and better application development, and the traditional Birds-of-a-Feather sessions in the evening. Java sessions for today and tomorrow:
    - Next Gen Enterprise Apps - Bert Ertman and Paul Bakker talked about new Java EE 6 APIs that reduce the need for boilerplate code and configuration.
    - JavaFX 2.0 – A Java developer’s guide - Stephen Chin and Peter Pilgrim will give an overview of the new version and how Java developers can take advantage of it.
    - Java Rich Clients with JavaFX 2.0 - Richard Bair and Jasper Potts will get into the JavaFX 2.0 APIs.
    - Building an end-to-end application using Java EE 6 and NetBeans - Arun Gupta will showcase how to write Java EE 6 applications more effectively.
    - The OpenJDK Community BOF with Dalibor Topic.
    Starting Tuesday, come by the Oracle booth to chat about technology, enter our raffle, and have a beer every day at 18:45. The sessions will be available on the Parleys website after the conference. In the meantime, you can learn a lot about these Java technologies on our website:
    - JavaFX 2.0 tutorials and documentation
    - OpenJDK
    - News from the GlassFish community
    - Java EE 6 resources
    - JavaOne sessions

    Read the article

  • IE8 complains about SSL name mismatch

    - by Cerin
    When visiting an SSL-protected website, IE8 complains that the certificate name does not match the website address, but gives no information about the certificate or what name it's looking for. Visiting the same site in IE9 (or IE9 in "IE8 mode"), Firefox, Chrome, and Safari shows no problems, and the certificate matches the address. Certificate checkers indicate everything is installed and configured correctly. Does anyone know what might be causing this? Is this a known issue or bug in IE8? I've been Googling for similar issues, but due to the uncertainty as to what's actually going on, I'm not sure what to search for. My problem is similar to this question; however, my server is running Apache 2.
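
    One way to compare what a non-SNI client such as IE8 on Windows XP receives with what SNI-capable browsers receive is to pull the served certificate with and without SNI; a hedged check from any machine with OpenSSL (the hostname is a placeholder):

    ```sh
    # Certificate handed out when the client does NOT send SNI (what IE8 on XP sees)
    openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null \
        | openssl x509 -noout -subject

    # Certificate handed out when the client DOES send SNI (what IE9/Firefox/Chrome see)
    openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -subject
    ```
    If the two subjects differ, the default (non-SNI) virtual host is serving a different certificate, which would explain why only IE8 complains.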

    Read the article

  • Both IPv4 and IPv6 at the same time over a DSL connection?

    - by namiheike
    Let me describe my situation: when I connect my computer with a wire, I get an IPv6 address automatically, there is a "Wired connection" tab in Network Manager, and I can access websites that support IPv6 (Google, Facebook, Twitter...) with a hosts file, or use a proxy like google.com.sixxs.org. But if I want to access the whole internet, I have to create a DSL connection with the username and password my ISP gave me. However, after I switch to this DSL connection, I cannot access websites over IPv6, even when the site's IPv6 address is in /etc/hosts. I then realized that I had lost my IPv6 connectivity, because ping6 says connect: Network is unreachable. The problem is, there is no IPv6 tab or option in the DSL connection's configuration. It feels like I can only use one connection at a time, but the DSL connection doesn't support IPv6 and the wired connection doesn't support IPv4 (I mean, there is no way to enter the password the ISP gave me). When I work in MS Windows there is no such problem; it at least feels like I can access v4 and v6 at the same time. So how do I solve this? Thanks a lot. I'm on 11.10 + GNOME 3.
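
    A hedged troubleshooting sketch, assuming the wired interface is eth0 and the DSL connection is PPPoE (ppp0): the idea is to keep the wired link up for IPv6 while the DSL connection carries IPv4. The router address below is a placeholder:

    ```sh
    # After the DSL (PPPoE) connection is up, check whether eth0 still has its global IPv6 address
    ip -6 addr show dev eth0
    ip -6 route show                 # look for a "default via fe80::..." entry on eth0

    # Re-enable acceptance of router advertisements if it was switched off on reconnect
    sudo sysctl -w net.ipv6.conf.eth0.accept_ra=1

    # If only the v6 default route is missing, re-add it (fe80::1 is a placeholder for your router)
    sudo ip -6 route add default via fe80::1 dev eth0

    ping6 ipv6.google.com            # should succeed again alongside IPv4 over ppp0
    ```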

    Read the article

  • Securing RDP access to Windows Server 2008 R2: is Network Level Authentication enough?

    - by jamesfm
    I am a dev with little admin expertise, administering a single dedicated web server remotely. A recent independent security audit of our site recommended that "RDP is not exposed to the Internet and that a robust management solution such as a VPN is considered for remote access. When used, RDP should be configured for Server Authentication to ensure that clients cannot be subjected to man-in-the-middle attacks." Having read around a bit, it seems like Network Level Authentication is a Good Thing, so I have enabled the "Allow connections only from Remote Desktop with NLA" option on the server today. Is this action enough to mitigate the risk of a man-in-the-middle attack? Or are there other essential steps I should be taking? If a VPN is essential, how do I go about setting one up?

    Read the article

  • Implicit OAuth2 endpoint vs. cookies

    - by Jamie
    I currently have an app which basically runs two halves of an API - a RESTful API for the web app, and a synchronisation API for the native clients (all over SSL). The web app is completely JavaScript-based and is quite similar to the native clients anyway - except it currently does not work offline. What I'm hoping to do is merge the fragmented APIs into a single RESTful API. The web app currently authenticates by issuing a cookie to the client, whereas the native clients work using a custom HMAC access token implementation. Obviously a public/private key scenario for a JavaScript app is a little pointless. I think the best solution would be to create an OAuth2 endpoint on the API (like Instagram, for example: http://instagram.com/developer/authentication/) which is used by both the native apps and the web app. My question is: in terms of security, how does an implicit OAuth2 flow (storing the access token in local storage) compare to "secure" cookies? Presumably, although SSL solves man-in-the-middle attacks, the user could theoretically grab the access token from local storage and copy it to another machine?

    Read the article

  • Creating a backup - Rsync - Connection refused (111)

    - by pablofiumara
    I am trying to create a backup of my website for free. I just want a backup of my website, including not only all files and the configuration but also the databases - in other words, a full backup. If it can be done automatically, even better. I feel there are better ways than using cPanel to achieve this (actually, I believe some web hosts don't provide a cPanel at all). I read the following on how to do it: "Automatically mirror the entire contents and configuration of your main server to a secondary backup server on a completely separate network in a different data centre. Use RSync, FXP, cPanel voodoo, or whatever method you wish to automate syncing." That is why I installed the rsync daemon, which is an alternative to SSH for remote backups. I configured it, but the test went wrong. The terminal is showing me this:
    pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
    [sudo] password for pablofiumara:
    rsync: failed to connect to pablofiumara.com (50.87.147.75): Connection refused (111)
    rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
    pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
    failed to connect to 50.87.147.7 (50.87.147.7): Connection refused (111)
    rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
    What should I do? Is there a better or easier way to achieve what I described in the first paragraph?
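
    The double colon in the failing command expects an rsync daemon listening on port 873, which shared hosts rarely run; if the host allows SSH at all, here is a hedged sketch of pulling the backup over SSH instead (the user, paths and dump file name are placeholders):

    ```sh
    # Pull the web root over SSH (a single colon means rsync-over-SSH, no daemon needed)
    rsync -avz -e ssh user@pablofiumara.com:public_html/ ~/backups/site/

    # Databases are not ordinary files: dump them on the server first, then fetch the dump
    # (credentials can live in a ~/.my.cnf on the server so no password appears on the command line)
    ssh user@pablofiumara.com 'mysqldump --all-databases > ~/site-backup.sql'
    rsync -avz -e ssh user@pablofiumara.com:site-backup.sql ~/backups/
    ```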

    Read the article

  • Game Server Colocation

    - by Linuz
    Hello, I am new to colocation. I am looking for a good place to host my server, which would run around 5-10 Source servers (Team Fortress 2). I am looking at maybe around 90 players at a time for now, with players coming from all over the United States. Could I get more info on what exactly I am supposed to look for? Would it be right for me to get a 100Mb/s line, and does the 100Mb/s line in the following example actually mean 100Mb/s of upload bandwidth? Example: package 4 of FDC's services: http://www.fdcservers.net/server_colocation.php. Also, I want something unmetered - I never want to have to worry about going over some bandwidth limit or about DDoS attacks killing me. And if anyone has any other recommendations as to what network configuration I should get, or any other good colocation providers that are cheap, I would REALLY appreciate the help. Thanks

    Read the article

  • Personalized Pricing

    - by David Dorf
    In past postings I've spent a fair amount of time talking about targeted promotions. Using a complete view of the customer that includes purchase history, location history, and psychographics gleaned from social media, we can select the offer with the greatest chance of redemption. This is done to influence shopping behavior, which might mean introducing the consumer to a new product line, increasing their basket size, increasing frequency of purchases, etc. Safeway seems to be taking a slightly different approach with their personalized pricing. In addition to offering electronic coupons and club card offers, they are also providing a personalized price for certain items based on purchase history. So when Sally wants to shop at Safeway, she first checks the "Just for U" website for three types of deals. She starts by selecting manufacturer coupons to load into her loyalty card, then she checks the Club Card for offers like "buy one get one free." The third step is the interesting one. Safeway will set a particular lower price for Sally, good for 90 days, on items she buys often. Clearly this isn't enforcing a new behavior but rather instilling loyalty. I would love to know exactly how they are determining the personalized price. Of course bargain hunters can still stack the three offers so they can, for example, get their $4.99 oatmeal for $0.72. I like this particular question and answer from their website's FAQ: "My offers are not that great. Can I tell you what offers I need? That's a good idea. That functionality is not currently available, but we appreciate your input and are constantly improving our just for U program. Stay tuned for exciting enhancements!" I suppose if Safeway is tracking all the purchases, they can easily determine whether the customer is profitable. As long as the customer stays profitable, why not let them determine a few offers themselves? Food for thought.

    Read the article

  • Which server is best for me?

    - by mathew
    I am looking for the best hosting service for my website. My website is a PHP/MySQL-driven site which scrapes more than 10 websites, parses around 8-10 APIs, reads roughly 150 MB of .dat files (from the local hard drive), and also parses one RSS feed, pulls a live graph from other sites, a geo map from Google, the Google Maps API, and so on, plus one widget showing real-time results for anyone who chooses it. So my question is: which is the best option for me? I am currently considering the SoftLayer cloud, as they are pretty cheap and offer more facilities than Rackspace. Another option is a dedicated server with two single-core processors, 4 GB RAM and a 250 GB SATA II drive, with a 100 Mbps uplink. So please tell me which would be the better option. I have heard that a dedicated server has more limitations than the cloud, but the cloud uses a SAN for storage, so I am afraid database reads may be a bit slower... and their basic plan comes with only 1 GB of RAM.

    Read the article

  • Legal issues regarding embedding a toolbar into a browser [closed]

    - by OmarOthman
    We are in the process of developing software that provides a service to internet users, and we would like to ask about the legal liabilities of some issues. Of course, everything is to be done with the consent of the user of our software, but our concern is about third-party tools and services that may be invoked/used by our product. In particular, these are the concerns: (1) Embedding a toolbar into an existing browser. This screenshot is an example, where the words in the highlighted toolbar are passed to www.google.com for searching, and the contents of the window are the results of the search. I want to know whether any consent should be obtained before such a toolbar can be embedded in a web browser, whether there are any legal requirements imposed by the web browser, and whether different web browsers have different requirements (at least for Internet Explorer, Firefox, Chrome, Opera and Safari). (2) Invoking a free website from that toolbar (like Google’s search page). The screenshot above demonstrates such an existing toolbar. (3) Full ownership of and unrestricted access to the data entered into this toolbar. In the screenshot above, I want to take the words (translation english to spanish) and own them, i.e. store them in my database and do some processing on them. (4) The ability to track the pages visited by the user starting from that free website. In the screenshot above, you can notice that the user opted only for the third result, whose URL is translate.google.com. I want to have access to this and all URLs clicked from this page, for some processing as well. This is a commercial application, so I need a very concrete, precise and reference-supported answer.

    Read the article

  • Showing content from pages at different URLs (masking), possibly with .htaccess

    - by zigojacko
    If I have URLs like:
    domain.com/category/widgets/filter/blue
    domain.com/category/widgets/filter/red
    and it is pretty difficult to restructure them into something like:
    domain.com/category/blue-widgets
    domain.com/category/red-widgets
    is there any way at all - using URL rewrites or anything else in .htaccess or on the server - to display the URL as domain.com/category/blue-widgets on the domain.com/category/widgets/filter/blue page? I've looked into masking URLs but got nowhere, and this has been bugging me for almost 6 months now. Is there any way to achieve what I want to do? FYI: this is a Magento website, and I want to implement the above for potentially hundreds of URLs.
    Edit: To respond to @kkugelmann's answer: I couldn't get your proposed RewriteRule to make any difference in the .htaccess file, so I started testing a few things in an online .htaccess tester. The proposed RewriteRule didn't work in the tester either, but a couple of variations did - yet adding any of those RewriteRules to the website's .htaccess file did not rewrite the URL at all...
    Edit 2: By the way, if I add [R=301,L] to the end of the rewrite rule, it does then rewrite the URL - but of course it also 301-redirects it, which is unwanted behaviour.
    Edit 3: I found another question with the same issue... and an accepted answer that solved it, which seemed to involve using mod_proxy and the [P] flag on the rule (if I try this, the page 404s).
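
    A minimal sketch of the internal-rewrite approach in .htaccess (no R flag, so the browser keeps the short URL); the widget naming pattern is just the hypothetical example from the question, and on a Magento site the rule may need to sit above the stock rewrite block:

    ```apache
    RewriteEngine On

    # /category/blue-widgets  ->  /category/widgets/filter/blue  (internal rewrite, address bar unchanged)
    RewriteRule ^category/([a-z]+)-widgets/?$ /category/widgets/filter/$1 [L]
    ```
    The reverse direction (making the site emit the short URLs in its own links) still has to be handled in the application templates; mod_rewrite only maps incoming requests.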

    Read the article

  • How can I pass the referrer header from my HTTPS domain to HTTP domains?

    - by nutcracker
    My website is 100% HTTPS. I have links to other HTTP domains. The referrer header is not set when linking from an HTTPS page to an HTTP page. From http://en.wikipedia.org/wiki/HTTP_referrer: "If a website is accessed from a HTTP Secure (HTTPS) connection and a link points to anywhere except another secure location, then the referer field is not sent." I would prefer that other domains can see the referrer so that they know the traffic comes from my domain. Is there a way to force this header, or is there another solution? Update: I've done some basic testing using a redirect:
    http page -- link to http --> 301 redirect --> http page = referrer intact
    https page -- link to https --> 301 redirect --> http page = referrer blank
    https page -- link to http --> 301 redirect --> http page = referrer blank
    https page -- link to http --> 302 redirect --> http page = referrer blank
    The referrer is lost when linking from an HTTPS page to an HTTP redirect page on my own domain, so there is no referrer after the redirect.
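
    On current browsers the site can opt back in to sending the referrer by declaring an explicit referrer policy; a minimal sketch for Apache, assuming mod_headers is enabled (the same policy can be set per page with a <meta name="referrer" content="unsafe-url"> tag):

    ```apache
    # Send the full referrer even when navigating from this HTTPS site to plain-HTTP destinations.
    # Trade-off: full URLs, including query strings, are exposed to non-secure sites.
    Header always set Referrer-Policy "unsafe-url"
    ```
    Older browsers that predate the Referrer-Policy header will still withhold the referrer, so the destination sites may continue to see some traffic as direct.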

    Read the article

  • Oracle Traffic Director – download and check out new cool features in 11.1.1.7.0 by Frances Zhao

    - by JuergenKress
    As Oracle's strategic layer-7 software load balancer product, Oracle Traffic Director is fast, reliable, secure, easy to use and scalable, and you can deploy it as the reliable entry point for all TCP, HTTP and HTTPS traffic to application servers and web servers in your network. The latest release, Oracle Traffic Director 11.1.1.7.0, is available for ExaLogic and the Database Appliance! For download and details please visit the Traffic Director OTN website. In this release, we have introduced some major new functionality and improvements:
    - Web application firewall. Oracle Traffic Director supports web application firewalls. A web application firewall (WAF) is a filter or server plugin that applies a set of rules, called rule sets, to an HTTP request. Using a web application firewall, users can inspect traffic and deny requests to protect back-end applications from CSRF vulnerabilities and common attacks such as cross-site scripting.
    - WebSocket connections. Oracle Traffic Director handles WebSocket connections by default. WebSocket connections are long-lived and allow support for live content, real-time games, video chatting, and so on.
    - Support for LDAP/T3 load balancing. Oracle Traffic Director now supports basic LDAP/T3 load balancing at layer 7, where requests are handled as generic TCP connections for traffic tunneling. It works in full-NAT mode.
    Please download and try it out. For more information, check out the data sheet and the documentation. For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: traffic director, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Basic Google Analytics Click Tracking and/or Overview

    - by Alan Storm
    This is a really basic Google Analytics question. Apologies in advance if it's not appropriate here, but I've had a lot of luck on Stack Overflow and this seems like the best Stack Exchange site for a question like this. I'm trying to understand how Google Analytics goals work, or whether they're the right feature to be using for my situation. Most of the documentation I find online refers to the old version of the UI, not the new one. I have a website, let's call it blog.example.com. This website drives traffic to an ecommerce store, let's call that store.example2.com. I want to get reports on which links from blog.example.com are being clicked through to store.example2.com. How do you do this in Google Analytics? Are goals the right area to be looking at? Do I set up the goals on store.example2.com or blog.example.com? Or both? Is there any canonical user guide (free or paid) that covers how this works? I'm a competent programmer, but it's years since I dealt with conversion tracking on any serious level, and we've progressed well beyond my frozen-caveman pixel-tracking knowledge. Thanks in advance

    Read the article

  • Blank pale blue screen with Live USB Kubuntu on AMD Sempron 2800+ processor

    - by WGCman
    I am trying to install Kubuntu onto a USB stick to use on my Acer Aspire 1362 laptop with an AMD Sempron 2800+ chip. Using Windows XP, I downloaded and saved to the laptop's hard drive: kubuntu-12.04.1-desktop-i386.iso from the GetKubuntu website and LinuxLive USB Creator 2.8.16.exe from the LinuxLive website. I then installed the latter and ran it, installing Kubuntu onto the memory stick. Leaving the BIOS setup unchanged, the USB stick is ignored and Windows boots. If I change the BIOS boot order so that the memory stick takes precedence, I see a dark blue screen announcing Kubuntu 12.04, and on selecting either "Live Mode" or "Persistent mode", messages flash by quickly, some of which appear to be error messages, including "trying to unpack rootfs image as initramfs", "cannot allocate resource for mainboard", and "no plug and play device found". Eventually I see a pale blue screen with four moving dots announcing Kubuntu 12.04, similar to the login screen of my Kubuntu desktop, but with no invitation to log in or indeed any dialog. After several minutes, this changes to a black screen with more messages, including "no caching mode present" and "ADDRCONF(NETDEV_UP): wlan0: link is not ready", then degrades to a blank pale blue screen which can only be cleared by switching the computer off. Finding no way to log the error messages passing by, I managed to photograph most of them, but I know of no way to attach the photo to this forum. As suggested by User 68186 (to whom thanks!), I have edited my original post to reflect the recent progress, so the following two comments are now superseded.

    Read the article

  • Perform action based on load avg

    - by sfx
    I'm running some web applications on a Debian server and sometimes have to struggle with DDoS attacks. They eat up all my resources and I can no longer SSH into the server. One idea was to drop all new connections if the load average is too high, so there are still resources left for me, and to accept new connections again once the load average is low enough. Since this has to work under heavy load, I'm afraid a cron job wouldn't be fast enough or would take too many resources. tl;dr: Is there a way to configure the behaviour when the load average is above a specific threshold?
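
    A minimal sketch of the polling approach (the threshold, port and interval are arbitrary placeholders; it needs an iptables new enough to support -C, and SSH is deliberately left untouched so you can always get in):

    ```sh
    #!/bin/sh
    # Drop new HTTP connections while the 1-minute load average is above THRESHOLD.
    THRESHOLD=10

    while true; do
        load=$(cut -d ' ' -f1 /proc/loadavg | cut -d '.' -f1)    # integer part of the 1-min load average
        if [ "$load" -ge "$THRESHOLD" ]; then
            # insert the DROP rule only if it is not already present
            iptables -C INPUT -p tcp --dport 80 --syn -j DROP 2>/dev/null \
                || iptables -I INPUT -p tcp --dport 80 --syn -j DROP
        else
            iptables -D INPUT -p tcp --dport 80 --syn -j DROP 2>/dev/null
        fi
        sleep 5
    done
    ```
    A connection-rate limit in the web server, or an iptables connlimit/hashlimit rule, is usually a more robust first line of defence, since the load average reacts slowly.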

    Read the article

  • .htaccess https redirect best method

    - by Douglas Cottrell
    I have searched through all the redirects posted by others and can't quite find the answer to my problem. I have a website with over 3000 pages and we are getting duplication issues within Google. We want everything in the parent directory to stay on plain HTTP except our contact.php and login.php pages. We then have three folders that must be secured: admin, clients, customers. I have tried using the following code in separate .htaccess files for each folder, but I keep getting a conflict, and I am still trying to find a good solution for the home directory:
    RewriteEngine On
    RewriteCond %{SERVER_PORT} 80
    RewriteCond %{REQUEST_URI} admin
    RewriteRule ^(.*)$ https://www.website.com/$1 [R,L]
    Any help would be greatly appreciated.
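
    A hedged sketch of doing it all from a single .htaccess in the web root instead of per-folder files (the domain is the placeholder from the question; adjust folder and file names as needed):

    ```apache
    RewriteEngine On

    # Force HTTPS for the three protected folders and the two sensitive pages
    RewriteCond %{HTTPS} off
    RewriteCond %{REQUEST_URI} ^/(admin|clients|customers)/ [OR]
    RewriteCond %{REQUEST_URI} ^/(contact|login)\.php$
    RewriteRule ^(.*)$ https://www.website.com/$1 [R=301,L]

    # Force plain HTTP everywhere else, so each page exists at exactly one URL
    RewriteCond %{HTTPS} on
    RewriteCond %{REQUEST_URI} !^/(admin|clients|customers)/
    RewriteCond %{REQUEST_URI} !^/(contact|login)\.php$
    RewriteRule ^(.*)$ http://www.website.com/$1 [R=301,L]
    ```
    Keeping both directions in one file avoids the per-folder conflicts; canonical link tags can help mop up any remaining duplicate-content signals.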

    Read the article

  • Why was my site rejected for Google AdSense?

    - by hyuun jjang
    I have a 3-year-old blog with around 16 articles/tutorials about programming problems and solutions. It has been getting quite a lot of views lately, so I decided to apply for a Google AdSense account. When I first applied via Blogger, Google replied with the following statement: "Page Type: In order to participate in Google AdSense, publishers' websites and application information must satisfy the following guidelines: - Your website must be your own top-level domain (www.example.com and not www.example.com/mysite). - You must provide accurate personal information with your application that matches the information on your domain registration. - Your website must contain substantial, original content..." So, as I understood it, I decided to buy a domain and point my Blogger blog to that new naked domain. Here is the newly bought domain where all the content of my old blog now resides: http://icodeya.com/ I reapplied, hoping that this time I would make the cut. But then I got this reply: "Further detail: Unable to review your site: While reviewing http://www.icodeya.com/, we found that your site was down or unavailable. We suggest you check whether there was a typo in the URL submitted. When your site is operational, you can resubmit your application with the correct site by following the directions below." I'm a bit disappointed. Maybe I did something wrong with the DNS configuration or something. But you can clearly see that my site is fully functional. I heard that Google sends robots to crawl the site, etc. It's just sad, because I invested in a domain name and now I can't even find a way to earn from it. Any tips?
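
    Before resubmitting, it may be worth confirming from an outside machine that both the bare and the www hostnames resolve and answer cleanly; a quick hedged check (the expected output will vary):

    ```sh
    dig +short icodeya.com           # should return the host's IP address(es)
    dig +short www.icodeya.com       # should also resolve (A record or CNAME)

    curl -I http://icodeya.com/      # expect 200, or a single 301/302 to the canonical host
    curl -I http://www.icodeya.com/  # watch for redirect loops, 5xx errors or very slow responses
    ```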

    Read the article

  • How to write a blog for SEO purposes

    - by Mathieu Imbert
    I have a photo sharing website which provides very little textual content. Users can add tags and a description to photos, but this creates a lot of duplicate content, because most of the descriptions will be 'wow', 'lol', ... I don't think I should rely on users to build my SEO. I think it would be a great idea to write a blog and use it to describe the best photos, start contests, explain themes - in short, create original content that search engines will love. Our website's main URL is www.domain.com, and our new blog is hosted on blog.domain.com. From an SEO perspective, is it a good idea to keep the blog separate from the main site? This has the advantage of leaving the original site unchanged, but will it add any PageRank to www.domain.com? If the blog ranks well it will obviously pass some PageRank to the original site through links. What do you think is the best option from an SEO perspective? Include the blog under www.domain.com, or leave it on blog.domain.com?

    Read the article

  • HTTP traffic through PIX VPN from outside site

    - by fwrawx
    I have a remote site with a website that only allows access from the outside IP assigned to our local PIX. I have users connecting to the local network over a VPN who need to be able to view this remote site. I don't think this works because the packets want to come in and go out over the same (outside) interface. So I'm looking for a way to make this work using the PIX, or by setting up a service on a server on the local network to act as a middle-man for the HTTP requests. The remote site doesn't support setting up a VPN to our PIX, and the remote website serves its pages on a non-standard port. Can I use Squid or something similar to proxy just this one site?
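
    One hedged way to build that middle-man on an internal Apache box, assuming mod_proxy and mod_proxy_http are enabled (the hostname and port are placeholders; Squid can do the same job with a cache_peer ... originserver entry):

    ```apache
    # VPN users browse http://internal-server/remote-site/ ;
    # the internal server fetches the pages from the remote site on its non-standard port,
    # so the requests reach the remote site from the PIX's outside IP.
    ProxyRequests Off
    ProxyPass        /remote-site/ http://remote.example.com:8081/
    ProxyPassReverse /remote-site/ http://remote.example.com:8081/
    ```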

    Read the article

  • Allow from referer for HTTP-basic protected SSL apache site

    - by user64204
    I have an Apache site protected by HTTP basic authentication. The authentication is working fine. Now I would like to bypass authentication for users coming from a particular website, by relying on the HTTP Referer header. Here is the configuration:
    SetEnvIf Referer "^http://.*.example\.org" coming_from_example_org
    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Deny from all
        Allow from env=coming_from_example_org
        AuthName "login required"
        AuthUserFile /opt/http_basic_usernames_and_passwords
        AuthType Basic
        Require valid-user
        Satisfy Any
    </Directory>
    This is working fine for HTTP, but failing for HTTPS. My understanding is that in order to inspect the HTTP headers, the SSL handshake must be completed, but Apache seems to evaluate the <Directory> directives before doing the SSL handshake, even if I place them at the bottom of the configuration file. Q: How could I work around this issue? PS: I'm not obsessed with the HTTP Referer header; I could use other options that would allow users from a known website to bypass authentication.
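
    A hedged sketch of one thing worth checking: the SetEnvIf and the <Directory> block have to be in effect for the SSL virtual host too (directives placed only in the :80 vhost are not inherited by the :443 vhost), and the Referer match may need to allow an https:// prefix in case the linking site is itself served over SSL. Hostnames and paths below are placeholders:

    ```apache
    <VirtualHost *:443>
        ServerName www.example.com
        SSLEngine on
        # ... certificate directives ...

        # Same referer-based exception as in the HTTP vhost, with an optional "s" in the scheme
        SetEnvIf Referer "^https?://([^/]+\.)?example\.org" coming_from_example_org

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Deny from all
            Allow from env=coming_from_example_org
            AuthName "login required"
            AuthUserFile /opt/http_basic_usernames_and_passwords
            AuthType Basic
            Require valid-user
            Satisfy Any
        </Directory>
    </VirtualHost>
    ```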

    Read the article

  • Analytics - Where do my drop offs go?

    - by BadCash
    I have a website set up with Google Analytics (through the WordPress plugin "Google Analytics for WordPress" by Joost de Valk). When I check the Visitors Flow in Google Analytics, it shows something like this: (home) - 43% drop-offs, /page-2/ - 10% drop-offs, ... etc ... I have also set up events for external links. The main "goal" of my website is to drive traffic to my Android app on Google Play, so I have a couple of different links to it that are all set up as events. Everything seems to be working; my events show up when I go to Content - Events in Google Analytics. However, it seems to me that some percentage of the users reported as "drop-offs" have in fact clicked on one of the external links, but there is no information about the reason for those drop-offs in the Visitors Flow chart. I can of course check each specific event category and event action and set "other" to Content/Page, which (I guess) shows the number of visitors who triggered a specific event on a specific page. It just seems like such a complicated way of going about this! So, is there a way to get a more detailed picture, including events, in the Visitors Flow chart? Something like: (home) - 43% drop-offs; Event Action: "Google Play" = 50%, "Youtube" = 10%, (not set) = 40%

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can use to make this file internal-only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working because they fetch the rewritten URL. Thanks for any help!
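
    A hedged sketch using IIS IP restrictions, assuming the "IP and Domain Restrictions" feature is installed and delegated, and that the internal requests originate from the server itself (add the server's own public IP as well if the internal call goes out via the external address); a web.config dropped into the /apis folder might look roughly like this:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <security>
          <!-- deny everyone except the server's own loopback addresses -->
          <ipSecurity allowUnlisted="false">
            <add ipAddress="127.0.0.1" allowed="true" />
            <add ipAddress="::1" allowed="true" />
          </ipSecurity>
        </security>
      </system.webServer>
    </configuration>
    ```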

    Read the article

  • Database Driven Web Application, C# Front-End and F# Back-End meaning

    - by user1473053
    Hi, I am an intern working with ASP.NET. My current task is to make a website which will incorporate some jQuery viewing features. It seems to me that this project will primarily deal with reading data from a database and making graphs out of it. This will require me to build custom queries from whatever the client is looking at. I think it is going to be what this guy calls an Ad Hoc Query tool. My plan is to make it a database-driven website, so I can use jQuery's dynamic viewing capabilities. I stumbled upon the functional programming paradigm and found F#. I read that because of its functional nature it is a good language for asynchronous work, and that with LINQ to SQL it is easy to write queries without embedding the query language directly. I understand the concept of the MVC design pattern, but I don't understand what people mean when they say C# is the front-end and F# is the back-end. Can someone clarify this for me? Also, what are your thoughts about doing this project in this way? Any comments and thoughts are greatly appreciated. I feel that learning F# will be a great learning experience for me. My guess is that the F# back-end is the part that controls the calls to the database - F# is possibly the model part of the design pattern, and C# is the controller, so the HTML, JavaScript and jQuery side will be my view. Is that right?

    Read the article
