Search Results

Search found 35692 results on 1428 pages for 'build process'.


  • The Home Stretch: NetBeans IDE 7.1 Release Candidate

    - by TinuA
    The first release candidate build of NetBeans IDE 7.1 is live and available for download, which means the big release (GA) is expected any day now.

    NetBeans IDE 7.1 delivers support for JavaFX 2.0, enabling the full compile, debug, and profile development cycle for JavaFX 2.0 applications and keeping developers in sync with the latest from the Java platform. Beyond JavaFX support, 7.1 also provides significant Swing GUI Builder enhancements, CSS3 support, and visual debugging tools for JavaFX and Swing user interfaces. And Git--a much-anticipated feature--has been integrated into the IDE.

    "The entire NetBeans team is tremendously excited about this release, which provides developers with more state-of-the-art tools for building front-end clients," says NetBeans Engineering Director John Jullion-Ceccarelli. "Whether you are doing JavaFX, HTML5, Swing, or JSF, NetBeans 7.1 will let you quickly and easily develop great-looking and full-featured clients for your Java or PHP-based applications."

    But there's one more task to check off before general availability: the NetBeans team has launched a Community Acceptance Survey to get user feedback about the release candidate. Download the RC build, test it, and take the survey to let the team know if NetBeans IDE 7.1 is ready for its debut!

    Read the article

  • Package libxul not found - Kiwix Wikipedia on Ubuntu Precise 12.04

    - by JHOSmAN
    I'm trying to install the Kiwix service, but I need a library that is not available for Ubuntu 12.04 LTS (Precise). I'll leave the log below; if someone could tell me how to install it, I would appreciate it.

      kiwix-0.9# ls
      aclocal.m4  COMPILE       config.sub    COPYING  install-sh  ltmain.sh    missing      static
      AUTHORS     config.guess  configure     depcomp  kiwix       Makefile.am  README
      CHANGELOG   config.log    configure.ac  desktop  libxul-dev_1.8.1.16+nobinonly-0ubuntu1_all.deb  Makefile.in  src

      root@ubuntu-MM061:/home/ubuntu/Escritorio/kiwix-0.9# ./configure
      checking for a BSD-compatible install... /usr/bin/install -c
      checking whether build environment is sane... yes
      [... dozens of successful configure checks elided ...]
      checking for strtol... yes
      Package libxul was not found in the pkg-config search path.
      Perhaps you should add the directory containing 'libxul.pc' to the PKG_CONFIG_PATH environment variable
      No package 'libxul' found
      Package libxul was not found in the pkg-config search path.
      Perhaps you should add the directory containing 'libxul.pc' to the PKG_CONFIG_PATH environment variable
      No package 'libxul' found
      checking for /stable... no
      checking for "/nsISupports.idl"... no
      configure: error: unable to find nsISupports.idl

      # apt-get install libxul
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      E: Unable to locate package libxul
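
    For what it's worth, the error message itself points at the usual mechanics: pkg-config will only find libxul if a libxul.pc file is on its search path. A minimal sketch, assuming a hypothetical install under /opt/xulrunner/lib/pkgconfig (not a path taken from the question):

      # Hypothetical: tell pkg-config where a libxul.pc lives, then re-run configure
      export PKG_CONFIG_PATH=/opt/xulrunner/lib/pkgconfig:$PKG_CONFIG_PATH
      pkg-config --exists libxul && echo "libxul found" || echo "libxul still missing"
      ./configure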

    Read the article

  • Need a hard disk recommendation for a Linux home server

    - by neotracker
    Hello, I'm planning to build a little Linux home server. It will mainly be used for storage and maybe as a media PC. I plan to build a software RAID 5 from four 1.5 TB or 2 TB hard drives. I had already decided to use the Western Digital Caviar Green 1.5 TB drive, but then I read about some problems with the WD Green series--many drives failing--and that they are not recommended for RAID anyway. Of course, I couldn't find many facts on the issues, so I thought I'd just ask here ;-) What hard drives would you recommend for a software RAID 5 setup? As I only need it for storage, the whole thing doesn't have to be too fast, so I prefer a cheap price and silence over great performance.
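
    As a point of reference for the setup being described (not a recommendation on the drives themselves), assembling four disks into a Linux software RAID 5 array is a one-liner with mdadm; the device names here are assumptions:

      # Hypothetical: build a 4-drive software RAID 5 array (check names with lsblk first)
      sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
           /dev/sdb /dev/sdc /dev/sdd /dev/sde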

    Read the article

  • Oracle is Child’s Play…in NSW

    - by divya.malik
    A few weeks ago, my colleague Michael Seback posted a blog entry on Oracle's acquisition of Haley. We recently read an interesting report from Down Under, and here is our press release on the implementation of Oracle's Policy Automation software in New South Wales, which I thought I would share. We always love hearing about our software "at work", especially in the Public Sector social services area, where it makes a big difference to people's lives. Here were some of the reasons why NSW chose Oracle software:

    "One of the things Oracle's Policy Automation system is good at is allowing you to take decision trees and rules that are obviously written in English and code them up using very much a natural-language approach," said Holling (CIO for Human Services). "So it was quite a short process to translate the final set of rules that were written on paper into business rules that were actually embedded in the system."

    "Another reason why we chose Oracle's automation tool is that it comes very tightly integrated with future versions of Siebel. It allows us to basically take the results of the Policy Automation survey and populate our client management system database with that information," said Holling.

    According to Surend Dayal, North America VP, Oracle's Policy Automation has applications across a wide range of industries, including the public sector--especially health and human services--as well as financial services, insurance, and even airline rewards programs. In other words, any business process that requires consistent, accurate decision-making where complex legislation and/or internal policies are involved.

    Click here to read more about Oracle and Haley.

    Read the article

  • Xorg becomes unkillable at 3AM

    - by chew socks
    Most nights, some time in the 3AM hour, my Xorg process will spike to 100% CPU and GPU load will also hit 100%. The process also becomes unkillable: I cannot sudo kill -9 it or get control back with sudo service lightdm restart. I also cannot switch to a tty with Ctrl + Alt + F1. To reboot I have to log in over ssh, but this is not ideal, because if I reboot while it is doing this, my ZFS pool (which is where my /home is) will fail to mount when it comes back up. Does anyone have any ideas why I can't stop and restart Xorg, or even better, why this is happening? Thanks.

    NOTE: For anyone who comes looking for the same problem: I disabled Catalyst AI and made it through the night. I've been up for 1 day 3 hours now. My record for this month is 2 days and 19 hours without a problem. My all-time record is 6 days without a crash. I'll post here if it crashes again or I'm able to set a new record.
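
    A hedged diagnostic sketch for the "unkillable" part: a process that ignores even SIGKILL is usually stuck in uninterruptible sleep inside the kernel (state D, as can happen with a wedged GPU driver). Over ssh, the state and kernel wait channel can be checked like this:

      # 'D' in STAT means uninterruptible sleep; no signal is delivered until
      # the kernel call the process is blocked in returns
      ps -o pid,stat,wchan:32,cmd -C Xorg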

    Read the article

  • Likelihood of obtaining the same IP address after restarting a router

    - by ?affael
    My actual objective is to simulate the logged IPs of website users, who are all assumed to use dynamically assigned IPs. There will be two kinds of users: good users, who only change IP when the ISP assigns a new one, and bad users, who will restart their router to obtain a new IP. So what I would like to understand is what assignment mechanics are usually at work here: from what pool of IPs is one chosen, and is the probability uniformly distributed? I know there is no definite and global answer, as this process can be adjusted by the ISP, but maybe there is something like a technological frame and common practice that allows some plausible assumptions.

    UPDATE: A bad user will restart the router as often as necessary. So here the central question is how many IP changes, on average, are needed to end up with a previously used IP.
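
    Under the simplest assumption--each lease is drawn uniformly and independently from a pool of N addresses--the number of restarts until a given previous IP comes back is geometric with success probability 1/N, so about N restarts on average. A minimal Monte Carlo sketch of that assumption (the pool size is made up):

      #!/usr/bin/env bash
      # Average restarts until the previous IP is re-drawn, assuming a uniform pool
      POOL=256; TRIALS=500; total=0
      for ((t = 0; t < TRIALS; t++)); do
        prev=$((RANDOM % POOL)); n=0
        while cur=$((RANDOM % POOL)); [ "$cur" -ne "$prev" ]; do
          n=$((n + 1))
        done
        total=$((total + n + 1))
      done
      echo "average restarts: $((total / TRIALS))"   # expect roughly POOL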

    Read the article

  • Oh snap! My RPi was upgraded to 512MB! Woo-hoo!

    - by hinkmond
    I ordered a Raspberry Pi Model B (256MB) over 4 months ago on backorder. When it finally came, I saw it was upgraded to the new half-a-gig model! Woot! But all was not perfect. Gary C. told me the shipped configuration of the new RPi models didn't have the right firmware for 512MB, and I had to upgrade the start.elf in the /boot directory to recognize all of the 512MB of RAM. I did a "free" command and, sure enough, saw only 240MB. Sadness. But Gary gave me a copy of his start.elf, which worked after some trial and error. For anyone ordering the new RPi Model B w/512MB, here are the steps to get you going with the full 512MB of RAM:

      sudo apt-get update --fix-missing
      sudo apt-get upgrade --fix-missing
      # NOTE: This step takes at least a couple hours on a fast network
      wget https://raw.github.com/raspberrypi/firmware/164b0fe2b3b56081c7510df93bc1440aebe45f7e/boot/arm496_start.elf
      sudo mv /boot/start.elf /boot/orig-start.elf
      sudo mv arm496_start.elf /boot/start.elf
      sudo reboot

    After the reboot, free shows the full memory:

      free
                   total       used       free     shared    buffers     cached
      Mem:        497768     210596     287172          0      16892     169624
      -/+ buffers/cache:      24080     473688
      Swap:       102396          0     102396

    So of course this means... (drumroll) there is now 498MB available for the Java Embedded heap!

      java -Xmx400m -version
      java version "1.7.0_06"
      Java(TM) SE Embedded Runtime Environment (build 1.7.0_06-b24, headless)
      Java HotSpot(TM) Embedded Client VM (build 23.2-b09, mixed mode)

    Yeah, baby! Hinkmond

    Read the article

  • Ubuntu GRUB boot hangs on external USB drive

    - by schoetbi
    Hi, I just gave Xubuntu another try and installed it normally on an external USB hard disk. I have another hard disk installed inside the laptop that has Windows XP on it. Now the problem: when I boot from the external drive, the GRUB 2 boot menu shows up and I see all installed bootable partitions, including Windows. I select Xubuntu and wait. When the Xubuntu logo shows up, the boot process hangs. Now the funny thing: when the logo shows up, I can unplug the USB drive and reconnect it real fast, and then the boot process will continue!!! Since I am a Linux newbie, I would appreciate every hint that can solve this, so that I can enjoy a smooth Linux boot :-)

    EDIT: GRUB version:

      tobias@ubuntu:~$ grub-install -v
      grub-install (GRUB) 1.98+20100804-5ubuntu3

    Kernel:

      tobias@ubuntu:~$ uname -r
      2.6.35-23-generic

    Xubuntu is 9.10. Thanks
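
    The unplug-and-reconnect trick suggests the kernel goes looking for the root filesystem before the USB bus has settled. A sketch of one common workaround (an assumption here, not something confirmed in the question) is to give the kernel a root delay:

      # Hypothetical workaround: have the kernel wait for the USB disk before
      # mounting root. Run on the Xubuntu install, then reboot.
      sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&rootdelay=10 /' /etc/default/grub
      sudo update-grub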

    Read the article

  • How do I throttle a command in a terminal window?

    - by To Do
    I needed to run convert on a lot of images at the same time. The command took quite a while, but that doesn't bother me. The issue is that this command rendered my computer unusable while it was running (for about 15 minutes). So is it possible to throttle the command by limiting the resources (processor and memory) given to it, directly from the command line? This can only work if I add something to the same line before pressing Enter, because once I start the process the computer slows down so much that it is impossible, for example, to switch to System Monitor and reduce its priority.

    Edit: top and iotop results. I managed to run top and sudo iotop > iotop.txt while doing one of these convert operations (the raw iotop.txt is full of terminal escape codes and difficult to read).

    Results of top:

      PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
      14275 username 20 0  4043m 3.0g 1448 D 7.0  80.4 0:16.45 convert

    Results of iotop (escape codes stripped):

      Total DISK READ: 1269.04 K/s | Total DISK WRITE: 0.00 B/s
      TID   PRIO USER     DISK READ   DISK WRITE SWAPIN  IO     COMMAND
      2516  be/4 username 350.08 K/s  0.00 B/s   0.00 %  0.00 % zeitgeist-datahub
      7394  be/4 username 568.88 K/s  0.00 B/s   77.41 % 0.00 % --rendere~.530483991
      14275 idle username 350.08 K/s  0.00 B/s   37.49 % 0.00 % convert S~f test.pdf
      2048  be/4 root     0.00 B/s    0.00 B/s   0.00 %  0.00 % [kworker/3:2]
      1     be/4 root     0.00 B/s    0.00 B/s   0.00 %  0.00 % init

    Furthermore, even after the process ends, the computer does not return to its previous performance. I found a way around this by running sudo swapoff -a followed by sudo swapon -a.
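
    A sketch of the usual "add something to the same line" approach: launch the command at the lowest CPU priority with nice and in the idle I/O class with ionice (both ship with Ubuntu); the convert arguments here are placeholders:

      # Only give convert CPU and disk time when nothing else wants them
      nice -n 19 ionice -c3 convert input-*.png output.pdf

    Memory is harder to cap from the same line; ulimit -v in the invoking shell is the closest shell built-in.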

    Read the article

  • IIS6 + PHP + FastCGI 500 Errors - Where to start looking?

    - by Bertvan
    I've set up IIS6 with FastCGI to use php-cgi.exe. I have some PHP websites by external parties that I'm trying to run in a test environment. One of the websites just plain gives me a FastCGI error page.

    Question: Is there some way to enable logging somewhere so that I can get a bit more information on this problem? I have looked in the Event Log, the IIS website log (c:\windows\system32\Logfiles), and the PHP log, but found nothing except that the IIS website log records the return of a 500 page.

    Question: Is there any other way to debug/check where things might be going wrong? Here is what the page looks like:

      FastCGI Error
      The FastCGI Handler was unable to process the request.
      Error Details:
      The FastCGI process exited unexpectedly
      Error Number: -1073741571 (0xc00000fd).
      Error Description: Unknown Error
      HTTP Error 500 - Server Error. Internet Information Services (IIS)

    Read the article

  • Extract the first page from multiple PDFs

    - by Tim Alexander
    I've got about 500 PDFs to go through and extract the first page of. They then need to go through a time-consuming conversion process, so I was hoping to save some time with a batch process that extracts just the first page from each of the 500 PDFs and places it in a new PDF. I've had a poke around Acrobat but can find no real method of doing this for multiple files. Does anyone know any other programs or methods by which this could be achieved? Free and open source are obviously more favourable :)

    EDIT: I have actually had some success using Ghostscript to extract just one page. I am now looking at how to batch that, taking the list of files and working through them; see the sketch below.
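
    Building on the Ghostscript result above, a minimal batch sketch (assuming the PDFs sit in the current directory; the output directory name is made up):

      # Write the first page of every PDF in the current directory to firstpages/
      mkdir -p firstpages
      for f in *.pdf; do
        gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite \
           -dFirstPage=1 -dLastPage=1 \
           -sOutputFile="firstpages/$f" "$f"
      done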

    Read the article

  • Is there a RAR extractor (for multiple RAR files like .r00, etc.) that will use all four of my cores?

    - by Christopher Done
    I've got a quad-core Intel processor, and a big file split into little RAR files (foo.r00, foo.r01, etc.) which the RAR program extracts into one file/directory. Is there a RAR program where I can specify something like "use four cores" for the extraction process? At the moment it sits there using 100% of one core. I recognise the bottleneck might be my hard drive anyway, but I don't see a lot of HD usage and suspect the decompression process is more CPU-intensive than waiting on I/O. For example, GNU Make accepts an argument (-j, I think) to tell it how many cores to use, which I used to compile PHP 6 really quickly.

    Read the article

  • How to find a spyware DLL launched using svchost.exe

    - by Sheen
    This weekend I found my PC was possibly infected by some virus or spyware. There is one "svchost.exe -k netsvcs" process in my Task Manager, and it is running under my user name rather than a SYSTEM account. There is already another process with the same command-line options under the SYSTEM account. This user-account svchost.exe consistently consumes 50% CPU (1 of the 2 cores of my CPU). In Process Explorer, I can see it was started by explorer.exe instead of services.exe. However, I have failed to find the real service DLL's location in the registry or on disk. Does anyone know how to find this malicious program?

    Read the article

  • Why does the CPU usage reach 100% when the laptop is plugged in?

    - by Mayank
    I have an HP EliteBook 8440p with Windows 7 Ultimate, an i7 processor and 8 GB of RAM. For the last couple of days, when I plug in my laptop for charging, the CPU usage shoots up to 100% and remains there for around 15 minutes. No offending process shows up in Task Manager/Process Explorer, but the laptop is practically unusable for that time. The CPU usage becomes normal when I remove the power cord but shoots up again when it is plugged back in. This happens with different power adapters/wall sockets, so it is probably not a faulty adapter issue. Why is this happening, and are there any pointers for resolving it?

    Read the article

  • Windows 2003 Server - Logon Failure error message in Event Viewer

    - by user45192
    Hi guys, I received a lot of events logged in the Event Viewer with this message. I notice it is always the same user ID which encounters this error. The user ID is used by an application to access the database. However, this account does not exist on this server. How do I trace the services/programs used by this user ID that cause these error messages?

      Reason=Unknown user name or bad password
      User Name=
      Domain=
      Logon Type=3
      Logon Process=NtLmSsp
      Authentication Package=NTLM
      Workstation Name=
      Caller User Name=-
      Caller Domain=-
      Caller Logon ID=-
      Caller Process ID=-
      Transited Services=-
      Source Network Address=-
      Source Port=-
      User=SYSTEM
      ComputerName=

    Read the article

  • Geekswithblogs.net | Screen Resolutions of our Readers

    - by Jeff Julian
    Yesterday I talked about the browsers we see being used by our readers, based on our Google Analytics traffic, and today I want to share the screen resolutions we see. Having been a web developer most of my life, I find it hard to decide how large to build an application: you typically have a couple of huge high-resolution monitors on your own desk, but your typical end user is assumed to have 1024x768. With HTML5/CSS3 out, it is a little easier to come up with a design that scales to all resolutions, but it is still nice to know the numbers when deciding how much real estate I have on my clients' screens.

    If you look at these numbers for Geekswithblogs.net, a lot of our visiting readers have high-resolution monitors. After a little more investigation of the numbers, you will notice we do not have as much height available as we do width. If the primary goal of a site is to deliver as much data as possible in the viewable area without scrolling, this becomes a challenge when most of our pages have long pieces of formatted data. So our challenge is to build skins that use up more of the sides of the content toward the top on larger-resolution browsers and then entice the reader to scroll to get to the goodies embedded in the content of the posts. It is going to be an interesting battle for sure, but we really need more skin offerings on the site.

    Technorati Tags: Resolution Statistics, Geekswithblogs.net

    Read the article

  • StyleCop Custom Rules

    - by Aligned
    There are several blogs on how to do this (http://scottwhite.blogspot.com/2008/11/creating-custom-stylecop-rules-in-c.html, etc.). I've found a few useful things to point out.

    Debugging is difficult, but here are the steps (thanks to Tintin's answer):
    1) Delete your custom rules.
    2) Open Visual Studio (for dev) and open your custom rule solution.
    3) Build and deploy the custom rules (a post-build action to copy the rules into the StyleCop folder is handy).
    4) Open Visual Studio (for test).
    5) In VS (dev), use Attach to Process on devenv.exe (the test VS instance) and set breakpoints in the rules you want to debug.
    6) In VS (test), right-click on the project and Run StyleCop.
    7) Debug.
    This worked once; now I'm having problems getting it to work again. I also get the message "Cannot evaluate expression because the code of the current method is optimized" when I try to look at properties.

    To look at the source of the StyleCop.CSharp.Rules.dll that comes with the install, I used JustDecompile from Telerik.

    Create one XML file and name it the same as the one .cs file (CodingGuidelineRules.cs and CodingGuidelineRules.xml).

    To deploy:
    1. Build in Visual Studio.
    2. Close Visual Studio (StyleCop is running, so you can't overwrite your DLL without closing it).
    3. Copy the DLL from the bin folder to C:\Program Files (x86)\StyleCop 4.7\.
    4. Open the settings file or re-open Visual Studio.

    Read the article

  • rdiff-backup failed because the target machine was down, but is unkillable

    - by Markus
    My backup script was invoked by cron and used rdiff-backup to back up the user files onto a target system on the network. That target computer went down at some point, yet still appeared as mounted on the server. rdiff-backup didn't do anything, but it still appears as a process, and kill-ing it doesn't stop it. Similarly, running rdiff-backup for other directories works, but it doesn't exit properly and remains in the process list. Is there anything short of rebooting the server that I can try?
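
    A hedged sketch of the usual unwedging steps when processes hang on a dead network mount (the mount point here is an assumption, not taken from the question):

      # Check whether the stuck processes are in uninterruptible sleep ('D');
      # those ignore signals until the dead mount stops blocking them
      ps -o pid,stat,cmd -C rdiff-backup
      # Force a lazy unmount of the stale mount so the blocked I/O can fail out
      sudo umount -f -l /mnt/backup-target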

    Read the article

  • Running a Screen instance of a program as non-root

    - by user288467
    I've got a dedicated server (Ubuntu 12.04, no GUI) set up to launch an instance of McMyAdmin attached to a screen session every time I reboot the hardware. I have the command saved in root's crontab as:

      @reboot cd /var/MC_SVR && screen -dmS McMyAdmin ./MCMA2_Linux_x86_64

    The problem is, I have a user set up specifically for FTP access to the server files so I don't always have to SSH into the machine. Since the server is being started as a root process, all the files it makes are, obviously, owned by root. So I chown'd all the files and set them to ftpuser. Now I'm stuck trying to get the process to start as ftpuser. I've tried the following, to no avail:

      cd /var/MC_SVR && su ftpuser - -c 'screen -dmS McMyAdmin ./MCMA2_Linux_x86_64'

    I try this in a terminal and get no errors or anything (in fact, I never get anything unless it's a syntax error from su), but there's no screen instance to access, so I assume the server never starts. So, what am I doing wrong? Or am I just not accessing the screen instance correctly, since it's (supposed to be) launched by another user?
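
    A sketch of one likely fix (assumptions: the "-" belongs before the username, and the cd should happen inside ftpuser's shell so the whole pipeline, screen included, runs as that user):

      su - ftpuser -c 'cd /var/MC_SVR && screen -dmS McMyAdmin ./MCMA2_Linux_x86_64'
      # screen sessions are per-user, so list them as ftpuser too:
      su - ftpuser -c 'screen -ls'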

    Read the article

  • mongod fork vs nohup

    - by Daniel Kitachewsky
    I'm currently writing process management software. One package we use is mongo. Is there any difference between launching mongod with

      mongod --fork --logpath=/my/path/mongo.log

    and with

      nohup mongod >> /my/path/mongo.log 2>&1 < /dev/null &

    My first thought was that --fork could spawn more processes and/or threads, and it was suggested to me that --fork could be useful for changing the effective user (downgrading privileges). But we run everything under the same user (both the process manager and mongod), so is there any other difference? Thank you
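
    One observable difference worth sketching (the paths and dbpath below are made up): a daemonized mongod detaches from the session and controlling terminal, while the nohup'd one keeps the invoking shell as its parent, which matters to a process manager that tracks its children:

      mongod --fork --logpath /tmp/mongo-fork.log --dbpath /tmp/db1
      # Compare parent PID, session ID, and terminal; the forked instance
      # should show PPID 1 and no controlling TTY
      ps -o pid,ppid,sid,tty,cmd -C mongod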

    Read the article

  • The balance between client and server functionality

    - by Eugen Martynov
    I want to bring up a discussion that started in our teams and get your opinion on it. Assume we have a user account which can have different credentials for authentication and an associated email for recovery. A user can sign up with an email address or use a social profile to complete the signup process. The REST API from the backend to the client looks like:

      1. Create account
      2. Authorise
      3. Update user data
      4. Link social account
      5. Register email
      6. Verify email

    In addition, our BE is distributed and divided between several services/servers/clusters, so different calls go to different endpoints. In the case of social signup, some of the steps can be skipped or simplified. For example, with Facebook signup we can already skip the email registration and verification steps (we ask for the email permission from the user), skip linking the social account, and pre-fill the user's display name. So we proposed adding another endpoint which hides/combines the different calls on the BE and returns the whole process result to the clients (see the sketch below).

    The pros of this approach:
    - No more duplication of functionality between clients
    - Faster networking and better user experience

    The cons of this approach:
    - Additional work for the backend
    - Probably more complex scenarios in future updates

    I would like to get your opinion on or experience with this situation, especially if you have already run into the second con.
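
    To make the proposal concrete, a hypothetical sketch of the combined endpoint from the client's side (the URL, host, and fields are all made up for illustration):

      # One aggregated call replacing the separate create-account, link-social,
      # and email registration/verification steps
      curl -X POST https://api.example.com/v1/signup/facebook \
           -H 'Content-Type: application/json' \
           -d '{"fb_token": "...", "display_name": "..."}'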

    Read the article

  • Microsoft Changes Developer Account Registration Requirements

    - by Tim Murphy
    Over the last couple of weeks I have noticed that Microsoft seems to have changed the requirements for corporate accounts. These requirements were not in effect when I originally set up the account for the company that I work for. We also recently had our corporate account canceled without explanation and are in the process of working to get it reinstated. This all seems to revolve around rules intended to increase confidence in the producers of content. They are now having Symantec validate a company based on legal documents.

    In the past there have been problems with getting credit cards accepted. We have had to set up new Live IDs to satisfy whatever glitch the system had or whatever unexplainable requirement came up. I am hoping that in the time that has elapsed these problems have been resolved.

    In truth, I can't say that these new requirements weren't always in place, but it is getting frustrating to help clients set up accounts. I am encouraged that they have taken steps to safeguard the consumer from Joe Fly-by-Night, but they also need to make sure that the process doesn't become so complex that it drives companies away from participating in the store. We will have to keep an eye on this as things evolve.

    del.icio.us Tags: Windows Phone Development, Windows 8 Development, Windows Phone, Windows 8, Registration, Corporate Accounts

    Read the article

  • Code Clone Analysis on Rawr – Part 1

    - by Dylan Smith
    In this post we’ll take a look at the first result from the Code Clone Analysis and do some refactoring to eliminate the duplication. The first result indicated an exact match repeated 14 times across the solution, with 18 lines of duplicated code in each of the 14 blocks.

    Net lines of code deleted: 179

    In this case the code in question was a bunch of classes representing the various bosses. Every boss class has a constructor that initializes a whole bunch of properties of that boss; however, for most bosses a lot of these are simply set to 0. Every boss class inherits from the class MultiDiffBoss, so I simply moved all the initialization of the various properties to the base class constructor and left it up to the subclasses to set only those that differ from the default values. There are actually 22 Boss subclasses; however, due to some inconsistencies in the code structure, Code Clone only identified 14 of them as identical blocks. Since I was in there refactoring the 14 identified already, it was pretty straightforward to identify the other 8 subclasses that had the same duplicated behavior and refactor those also.

    Note: Code Clone Analysis is pretty slow right now. It takes approx. 1 min to build this solution, but 9 mins to run Code Clone Analysis. Personally, if the results are high quality, I'm OK with it taking a long time to run, since I don't expect to run it all that often. However, it would be nice to run it as part of a nightly build, but at this time I don't believe that's possible outside of Visual Studio, due to a dependency on the metadata available in the VS environment.

    Read the article

  • Mapping Your Customer Experience Journey

    - by Michael Hylton
    Those who attended today's Oracle Customer Experience Summit keynote heard Brian Curran talk about the strategies and best practices for implementing customer experience (CX) in your organization. He spoke about how this evolving journey begins with six steps to transform your business and put your customers front and center. Here are those six key steps:

    1. What are the strategic business objectives in your company?
    2. What are your operational objectives and the KPIs necessary to measure a CX project? Build an income statement, create "what if" scenarios, and see how changes impact your business' bottom line. Explore what keeps you from reaching your own goals for your business.
    3. What are the business objectives and opportunities you want to meet?
    4. What are the trends and accelerators in the market? What factors going on in the market impact your business? Social? Mobile? Cloud? Just to name a few. Many of these trends may signal a change in the way people think about your business.
    5. What approach will you take to solve these issues? Understand who your customer is and how you need to adapt your business to build relevant, personalized customer experiences.
    6. What technologies can you implement to address CX? Does technology help you solve your problem?

    A great way to begin your customer experience journey is a concept called journey mapping, one of the most powerful and deceptively simple tools for unlocking CX innovation at your organization. Here is where you can learn more about how you can bring this concept into your business to drive great customer experiences.

    Read the article

  • How do I find the source of soft page faults?

    - by David Robison
    I have a Windows 7 x64 computer that, according to Performance Monitor, has 70,000 page faults per second when idling. That seems like a lot to me (every other computer I check has basically 0 page faults per second when idling). If I use Resource Monitor or Process Explorer to check hard faults, I see that they are basically 0, so all the page faults are soft. Normally, soft page faults are not a problem, but I suspect they might be causing issues for this computer given there are so many. I would like to identify which programs are causing the soft faults. Are there any tools that display the number of soft page faults for each process?

    Read the article
