University of Cyprus


IEEEXtreme 13.0: Registration is open!


Dear student,
Registration for the IEEEXtreme Programming Competition is now open!

What is the IEEEXtreme?

IEEEXtreme is a global challenge in which teams of IEEE Student members – advised and proctored by an IEEE member, and often supported by an IEEE Student Branch – compete in a 24-hour time span against each other to solve a set of programming problems.

Who can compete?

Teams of up to three collegiate students who are current IEEE Student members.
There is no limit on the number of teams a college or university can form.

What could I win?

  • Fame: Unlimited bragging rights and an item for your resume.
  • Fortune: Among other great prizes, the Grand Prize is a trip to the IEEE conference of your choice, anywhere in the world.

All active participants in the competition will receive a digital certificate and a digital gift bundle.

How about the date and time?

IEEEXtreme 13.0 will take place on October 19, 2019. It will start at 00:00 UTC for all contestants around the globe and will end 24 hours later.

Since it is a global competition, where is the location?

As IEEEXtreme is a virtual online competition, a physical location, or venue, must be identified for participants to use during the 24-hour competition.

Venues can be in an IEEE Student Branch office, a college lab, or another location on campus. It must be a place that participants can use for the 24 hours during the competition, it should be equipped with at least one computer, and some type of connection to the internet must be provided.

I’ve heard the contest is pretty difficult. I’m in my first year of university and don’t think I’m good enough. Should I participate?

Yes. The competition is all about the experience. IEEEXtreme is a lot of fun, and it will help you face real-world problems that you may not see during college. Furthermore, the competition includes questions of varying difficulty, from easy/novice to expert level.

Get your team together early and register today. Together you can prepare for the competition by visiting our Practice Community.

In 2018, IEEEXtreme hosted 9,500 participants from 76 countries!
Represent your school and your country in this year’s competition. Help us break the 10,000-participant benchmark and

REGISTER NOW!

Good luck!
The IEEEXtreme Team

For more information and to connect with the IEEEXtreme Team visit:

 Void where prohibited. Those residing in OFAC-embargoed countries may compete, but are not eligible for monetary awards.


Unmanned Aerial Vehicles – Innovation and Challenges

The IEEE and the Cyprus Computer Society present the state of the art of drone technologies and applications.

3 October 2017
University of Cyprus
Building KOD 07, Room 10
Starts at 17:00
Free Food and Drinks

Presentations:


  • Presentations on the state of the art of drone technologies & applications
  • Drone Piloting
  • Prize draw for IEEE student members:
    One-day piloting course worth €250, by DJI Cyprus

Program

  • 17:00-17:25: “UAV in Emergency Response: Research and Innovation Challenges”
    Dr. Panayiotis Kolios, KIOS Research and Innovation Center of Excellence
  • 17:25-18:15: “Demonstration of UAV automated functionalities”
    Mr. Petros Petrides, KIOS Research and Innovation Center of Excellence
  • 18:15-18:30: Coffee break
  • 18:30-18:50: “Regulations on drone aviation in Cyprus”
    Mr. Marios Louka, DJI Cyprus
  • 18:50-19:10: “Monitoring Power Systems Lines using Drones”
    Mr. Costas Stasopoulos, EAC, IEEE Region 8 Past-Director
  • 19:10-19:15: “Contribution of Cyprus Computer Society (CCS) in Cyprus”
    Mr. Costas Agrotis, CCS Chairman
  • 19:15-19:30: “Introduction of IEEE: Its Vision and Role”
    Mr. Nicos Michaelides, CYTA, IEEE Cyprus Section Chair
  • 19:30: Drinks and snacks @ U-Pub – Prize draw for IEEE student members

Flyer (153 downloads)

Program (153 downloads)


Really rough notes on compiling source code on Fedora 25 for STM32F767 Nucleo-144 (Nucleo-F767ZI)

#eclipse with support for C/C++
sudo dnf install -y eclipse-cdt;
#cross-compiler for arm
sudo dnf install -y arm-none-eabi-gcc arm-none-eabi-gdb arm-none-eabi-binutils arm-none-eabi-newlib arm-none-eabi-gcc-cs-c++;
#manually installing openocd from the repository as the version in the repositories does not support our board (STM32F767 Nucleo-144 (Nucleo-F767ZI))
git clone http://openocd.zylin.com/openocd;
cd openocd/;
./bootstrap;
./configure;
make;
sudo make install;
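
A quick sanity check before moving on (assuming make install used the default /usr/local prefix, so the binaries are on your PATH):

#verify that the cross-toolchain and the freshly built openocd are usable
arm-none-eabi-gcc --version;
arm-none-eabi-gdb --version;
openocd --version;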

#download eclipse plugin from https://my.st.com/content/my_st_com/en/products/development-tools/software-development-tools/stm32-software-development-tools/stm32-configurators-and-code-generators/stsw-stm32095.license%3d1491636351998.html
#install via the menu “Help” > “Install New Software…” > “Add…” > “Archive…”. Find “en.stsw-stm32095.zip” and press OK. Tick the new repo and click Next.

#add http://gnuarmeclipse.sourceforge.net/updates as a repository in Eclipse: menu “Help” > “Install New Software…” > “Add…”. Type the name “GNU arm eclipse” and the address “http://gnuarmeclipse.sourceforge.net/updates”. Press OK. Tick the new repo and click Next.

#copy st_nucleo_f7.cfg next to the rest of the board configuration files, e.g. into /usr/local/share/openocd/scripts/board/
sudo cp st_nucleo_f7.cfg /usr/local/share/openocd/scripts/board/

Create a new STM32F7xx project and set the flash size to 2048 KB

Create a C/C++ Application run configuration

Create a new OpenOCD run configuration to run the .elf produced by the build above, and add the parameter

-f /usr/local/share/openocd/scripts/board/st_nucleo_f7.cfg

to the Config options field in the Debugger tab

#if you get an error opening the USB device, add your user to the root group (really ugly hack)
sudo usermod -a -G root george;
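
A cleaner alternative to the group hack is a udev rule. This is only a sketch: it assumes the Nucleo's on-board ST-LINK/V2-1 shows up with the usual STMicroelectronics USB IDs 0483:374b (check with lsusb).

#sketch: allow non-root access to the ST-LINK instead of the group hack above
sudo tee /etc/udev/rules.d/60-stlink.rules <<'EOF'
SUBSYSTEM=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="374b", MODE="0666"
EOF
sudo udevadm control --reload-rules;
sudo udevadm trigger;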

 

Needed packages:

  • sudo dnf install -y arm-none-eabi-gcc arm-none-eabi-gdb arm-none-eabi-binutils arm-none-eabi-newlib
  • Do not install openocd from the repositories; clone it from the upstream git server instead, as that has a newer version which supports our board:
    git clone http://openocd.zylin.com/openocd
    then build it as shown above

stm32f7x.cfg (compressed) (189 downloads) copy it where you have the rest of the target files,
e.g. /usr/share/openocd/scripts/target/stm32f7x.cfg

st_nucleo_f7.cfg (compressed) (177 downloads) copy it with the rest of the board configuration files,
e.g. /usr/share/openocd/scripts/board/st_nucleo_f7.cfg

The locations for the above files depend on your configuration

You need to download the STM32CubeF7 (https://my.st.com/content/my_st_com/en/products/embedded-software/mcus-embedded-software/stm32-embedded-software/stm32cube-embedded-software/stm32cubef7.license%3d1487716364634.html) ~634MB
Extract it.

Navigate to a ready project like the GPIO_IOToggle in STM32Cube_FW_F7_V1.6.0/Projects/STM32F767ZI-Nucleo/Examples/GPIO/GPIO_IOToggle

Compile each .c file using the following command, but fix the paths! You might also need to include the Inc directory of the project,
e.g.
arm-none-eabi-gcc -Wall -mcpu=cortex-m7 -mlittle-endian -mthumb -ISTM32Cube_FW_F7_V1.6.0/Drivers/CMSIS/Device/ST/STM32F7xx/Include -ISTM32Cube_FW_F7_V1.6.0/Drivers/CMSIS/Include -ISTM32Cube_FW_F7_V1.6.0/Drivers/STM32F7xx_HAL_Driver/Inc -I. -ISTM32Cube_FW_F7_V1.6.0/Drivers/BSP/STM32F7xx_Nucleo_144 -DSTM32F767xx -Os -c system_stm32f7xx.c -o system_stm32f7xx.o
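
Rather than invoking the compiler by hand for every file, something like the loop below works as well. This is only a sketch: it assumes the relative STM32Cube_FW_F7_V1.6.0 include paths resolve from your working directory (fix them as noted above), that you run it from the directory containing the example's .c files, and that any HAL/BSP sources the example references get the same treatment.

#sketch: compile every .c file in the current directory with the same flags as above
CFLAGS="-Wall -mcpu=cortex-m7 -mlittle-endian -mthumb -DSTM32F767xx -Os"
INCS="-I. -ISTM32Cube_FW_F7_V1.6.0/Drivers/CMSIS/Device/ST/STM32F7xx/Include -ISTM32Cube_FW_F7_V1.6.0/Drivers/CMSIS/Include -ISTM32Cube_FW_F7_V1.6.0/Drivers/STM32F7xx_HAL_Driver/Inc -ISTM32Cube_FW_F7_V1.6.0/Drivers/BSP/STM32F7xx_Nucleo_144"
for f in *.c; do
    arm-none-eabi-gcc $CFLAGS $INCS -c "$f" -o "${f%.c}.o";
done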

Merge all .o files into an .elf file

arm-none-eabi-gcc -mcpu=cortex-m7 -mlittle-endian -mthumb -DSTM32F767xx -TSTM32Cube_FW_F7_V1.6.0/Projects/STM32F767ZI-Nucleo/Templates/SW4STM32/STM32F767ZI_Nucleo_AXIM_FLASH/STM32F767ZITx_FLASH.ld -Wl,--gc-sections system_stm32f7xx.o main.o stm32f7xx_it.o -o main.elf
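
Optionally, sanity-check the linked image with arm-none-eabi-size (it ships with the binutils package installed earlier):

#print the text/data/bss footprint of the linked image
arm-none-eabi-size main.elf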

Convert the .elf file to a .hex

arm-none-eabi-objcopy -Oihex main.elf main.hex

Start openocd to attach to the board

sudo ../src/openocd -f /usr/share/openocd/scripts/board/st_nucleo_f7.cfg

Use telnet to control the board

telnet localhost 4444

Flash the board

reset halt
flash write_image erase /home/xeirwn/Downloads/ST/GPIO_IOToggle/Src/main.hex
reset run
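
As an alternative to the telnet session, OpenOCD's program command can flash, verify and reset the board in one non-interactive invocation; a sketch, assuming the board file sits at the path it was copied to above:

#one-shot flashing: program, verify, reset and exit
sudo openocd -f /usr/local/share/openocd/scripts/board/st_nucleo_f7.cfg -c "program main.elf verify reset exit"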

 

DONE

 

 

sudo dnf install eclipse-cdt-sdk;

Download the plugin from here: https://my.st.com/content/my_st_com/en/products/development-tools/software-development-tools/stm32-software-development-tools/stm32-configurators-and-code-generators/stsw-stm32095.license%3d1491636351998.html

 

Add http://gnuarmeclipse.sourceforge.net/updates as a repository in Eclipse

sudo dnf install -y arm-none-eabi-gcc-cs-c++;

Create a new OpenOCD run configuration and add the parameter

-f /usr/share/openocd/scripts/board/st_nucleo_f7.cfg

to the Config options field in the Debugger tab

Add RCC_OscInitStruct.PLL.PLLR = 7; to _initialize_hardware.c

 

I hope I did not forget anything.

Anyhow, this post will be updated soon.

 


ECE795 – Study Notes

1. INTRODUCTION

Page 2

Machine learning algorithms must be trained using a large set of known data and then tested using another independent set before they are used on unknown data.

The result of running a machine learning algorithm can be expressed as a
function y(x) which takes a new x as input and that generates an output vector y, encoded in the same way as the target vectors. The precise form of the function y(x) is determined during the training phase, also known as the learning phase, on the basis of the training data. Once the model is trained it can then determine the identity of new elements, which are said to comprise a test set. The ability to categorize correctly new examples that differ from those used for training is known as generalization. In practical applications, the variability of the input vectors will be such that the training data can comprise only a tiny fraction of all possible input vectors, and so generalization is a central goal in pattern recognition.

The pre-processing stage is sometimes also called feature extraction. Note that new test data must be pre-processed using the same steps as the training data. The aim is to find useful features that are fast to compute, and yet that also preserve useful discriminatory information. Care must be taken during pre-processing because often information is discarded, and if this information is important to the solution of the problem then the overall accuracy of the system can suffer.

Page 3

Applications in which the training data comprises examples of the input vectors along with their corresponding target vectors are known as supervised learning problems. Cases such as the digit recognition example, in which the aim is to assign each input vector to one of a finite number of discrete categories, are called classification problems. If the desired output consists of one or more continuous variables, then the task is called regression. An example of a regression problem would be the prediction of the yield in a chemical manufacturing process in which the inputs consist of the concentrations of reactants, the temperature, and the pressure.

In other pattern recognition problems, the training data consists of a set of input
vectors x without any corresponding target values. The goal in such unsupervised learning problems may be to discover groups of similar examples within the data, where it is called clustering, or to determine the distribution of data within the input space, known as density estimation, or to project the data from a high-dimensional space down to two or three dimensions for the purpose of visualization.

Finally, the technique of reinforcement learning (Sutton and Barto, 1998) is concerned with the problem of finding suitable actions to take in a given situation in order to maximize a reward. Here the learning algorithm is not given examples of optimal outputs, in contrast to supervised learning, but must instead discover them by a process of trial and error. Typically there is a sequence of states and actions in which the learning algorithm is interacting with its environment. In many cases, the current action not only affects the immediate reward but also has an impact on the reward at all subsequent time steps.

The reward must then be attributed appropriately to all of the moves that led to it, even though some moves will have been good ones and others less so. This is an example of a credit assignment problem. A general feature of reinforcement learning is the trade-off between exploration, in which the system tries out new kinds of actions to see how effective they are, and exploitation, in which
the system makes use of actions that are known to yield a high reward.

1.1 Example: Polynomial Curve Fitting

Page 5

We fit the data using a polynomial function of the form:

y(x, w) = w_{0} + w_{1}x + w_{2}x^{2} + \dots + w_{M}x^{M} = \sum_{j=0}^{M}w_{j}x^{j}

where M is the order of the polynomial.

The values of the coefficients will be determined by fitting the polynomial to the
training data. This can be done by minimizing an error function that measures the misfit between the function y(x, w), for any given value of w, and the training set data points.

Our error function: Sum of the squares of the errors between the predictions y(x_{n} , w) for each data point x_n and the corresponding target values t_n.

E(w) = \frac{1}{2}\sum_{n=1}^{N}\{y(x_{n},w) - t_{n}\}^2
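
Because E(w) is a quadratic function of the coefficients, setting its derivatives with respect to each w_{i} to zero gives a set of linear equations with a unique solution; as a sketch, these are the standard least-squares normal equations:

\sum_{j=0}^{M}A_{ij}w_{j} = T_{i}, \quad A_{ij} = \sum_{n=1}^{N}(x_{n})^{i+j}, \quad T_{i} = \sum_{n=1}^{N}t_{n}(x_{n})^{i}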

A much higher-order polynomial can cause over-fitting: the fitted curve oscillates wildly and gives a very poor representation of the underlying function.

We can obtain some quantitative insight into the dependence of the generalization performance on M by considering a separate test set comprising 100 data points generated using exactly the same procedure used to generate the training set points but with new choices for the random noise values included in the target values.

Root Mean Square: E_{RMS} = \sqrt{2E(w^{*})/N}

The division by N allows us to compare different sizes of data sets on an equal footing, and the square root ensures that E_{RMS} is measured on the same scale (and in the same units) as the target variable t.

 

OTHER

To study

Curve Fitting

Additional Terms

Legend:

  • TP = True Positive
  • TN = True Negative
  • FP = False Positive
  • FN = False Negative

correct\ rate (accuracy) = \frac{TP + TN}{TP + TN + FP + FN}

sensitivity = \frac{TP}{TP + FN}

specificity = \frac{TN}{TN + FP}
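
As a small worked example with made-up counts TP = 40, TN = 45, FP = 5, FN = 10 (100 cases in total):

correct\ rate (accuracy) = \frac{40 + 45}{100} = 0.85, \quad sensitivity = \frac{40}{40 + 10} = 0.80, \quad specificity = \frac{45}{45 + 5} = 0.90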

Receiver Operating Characteristic

In statistics, a receiver operating characteristic (ROC), or ROC curve, is a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied. The curve is created by plotting the true positive rate (sensitivity) against the false positive rate (1  – specificity) at various threshold settings.
— From https://en.wikipedia.org/wiki/Receiver_operating_characteristic

[Figure: ROC plot of sensitivity against 1 - specificity for several system configurations.]

Assuming we have a system where changing its configuration yields the above results, we would pick the configuration that has the smallest Euclidean distance from the perfect configuration. The perfect configuration lies at the point (0, 1), where specificity and sensitivity are both equal to one.
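
Concretely, since the perfect point is at (1 - specificity, sensitivity) = (0, 1), the configuration chosen is the one minimizing:

d = \sqrt{(1 - specificity)^{2} + (1 - sensitivity)^{2}}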

Chapters to ignore

Non-Parametric


1st Workshop on Conformal Prediction and its Applications (COPA 2012)

==============================
1st Workshop on Conformal Prediction and its Applications (COPA 2012) to be held in conjunction with the 8th IFIP Conference on Artificial Intelligence Applications & Innovations (AIAI 2012)
Halkidiki, Greece, September 27-30, 2012
http://delab.csd.auth.gr/aiai2012/
===========================

Workshop Theme:
================

Quantifying the uncertainty of the predictions produced by classification and regression techniques is an important problem in the field of Machine Learning. Conformal Prediction is a recently developed framework for complementing the predictions of Machine Learning algorithms with reliable measures of confidence. The methods developed based on this framework produce well-calibrated confidence measures for individual examples without assuming anything more than that the data are generated independently by the same probability distribution (i.i.d.). Since its development the framework has been combined  with many popular techniques, such as Support Vector Machines, k-Nearest Neighbours, Neural Networks, Ridge Regression etc., and has been successfully applied to many challenging real world problems, such as the early detection of ovarian cancer, the classification of leukaemia subtypes, the diagnosis of acute abdominal pain, the assessment of stroke risk, the recognition of hypoxia in electroencephalograms (EEGs), the prediction of plant promoters, the prediction of network traffic demand, the estimation of effort for software projects and the backcalculation of non-linear pavement layer moduli. The framework has also been extended to additional problem settings such as  semi-supervised learning, anomaly detection, feature selection, outlier detection, change detection in streams and active learning. The aim of this workshop is to serve as a forum for the presentation of new and ongoing work and the exchange of ideas between  researchers on any aspect of Conformal Prediction and its applications.

The workshop welcomes submissions introducing further developments and extensions of the Conformal Prediction framework and describing its application to interesting problems of any field.

Submission
==========
Authors are invited to submit original, English-language research contributions or experience reports. Papers should be no longer than 10 pages, formatted according to the well-known LNCS Springer style. Papers should be submitted in either doc or pdf form to: [email protected]

Publication
===========
Submitted papers will be refereed for quality, correctness, originality, and relevance. Notification and reviews will be communicated via email. Accepted papers will be presented at the workshop and published in the Proceedings of the main event (by Springer). They will also be considered for potential publication in the Special Issues of the Conference.

Important Dates
===============
Full paper submission due: April 29, 2012
Notification of acceptance: May 26, 2012
Camera-ready paper submission: June 4, 2012

Honorary Chairs
===============
Vladimir Vapnik, NEC, USA & Royal Holloway, University of London, UK
Alexei Chervonenkis, Russian Academy of Sciences, Russia & Royal Holloway, University of London, UK

Program Chairs
==============
Harris Papadopoulos
Frederick University, Cyprus
Email: [email protected]

Alex Gammerman
Royal Holloway, University of London, UK
Email: [email protected]

Vladimir Vovk
Royal Holloway, University of London, UK
Email: [email protected]

Program Committee
=================
Vineeth Balasubramanian, Arizona State University, USA
Anthony Bellotti, Imperial College London, UK
David R. Hardoon, SAS Singapore
Mohamed Hebiri, Universite de Marne-la-Vallee, France
Shen-Shyang Ho, Nanyang Technological University, Singapore
Zakria Hussain, University College London, UK
Yuri Kalnishkan, Royal Holloway, University of London, UK
Matjaz Kukar, University of Ljubljana, Slovenia
Antonis Lambrou, Royal Holloway, University of London, UK
Rikard Laxhammar, University of Skovde, Sweden
Yang Li, Chinese Academy of Sciences, China
Zhiyuan Luo, Royal Holloway, University of London, UK
Andrea Murari, Consorzio RFX, Italy
Ilia Nouretdinov, Royal Holloway, University of London, UK
Savvas Pericleous, Frederick University, Cyprus
David Surkov, Egham Capital, UK
Jesus Vega, Asociacion EURATOM/CIEMAT para Fusion, Spain
Fan Yang, Xiamen University, China