

IEEEXtreme 13.0: Registration is open!

Dear student,
Registration for the IEEEXtreme Programming Competition is now open!

What is the IEEEXtreme?

IEEEXtreme is a global challenge in which teams of IEEE Student members – advised and proctored by an IEEE member, and often supported by an IEEE Student Branch – compete in a 24-hour time span against each other to solve a set of programming problems.

Who can compete?

Teams of up to three collegiate students who are current IEEE Student members.
There is no limit on the number of teams per institution; local colleges and universities can form multiple teams.

What could I win?

  • Fame: Unlimited bragging rights and an item for your resume.
  • Fortune: Among other great prizes, the Grand Prize is a trip to the IEEE conference of your choice, anywhere in the world.

All active participants in the competition will receive a digital certificate and a digital gift bundle.

How about the date and time?

IEEEXtreme 13.0 will take place on October 19, 2019. It will start at 00:00 UTC for all contestants around the globe and will end 24 hours later.

Since it is a global competition, where is the location?

Although IEEEXtreme is a virtual online competition, a physical location, or venue, must be identified for participants to use during the 24-hour competition.

Venues can be in an IEEE Student Branch office, a college lab, or another location on campus. It must be a place that participants can use for the 24 hours during the competition, it should be equipped with at least one computer, and some type of connection to the internet must be provided.

I’ve heard the contest is pretty difficult. I’m in my first year of university and don’t think I’m good enough. Should I participate?

Yes. The competition is all about the experience. IEEEXtreme is a lot of fun and will help you face real-world problems that you may not see during college. Furthermore, the competition includes questions of varying difficulty, from easy/novice to expert level.

Get your team together early and register today. Together you can prepare for the competition by visiting our Practice Community.

In 2018, IEEEXtreme hosted 9,500 participants from 76 countries!
Represent your school and your country in this year’s competition. Help us break the 10,000-participant benchmark and

REGISTER NOW!

Good luck!
The IEEEXtreme Team

For more information and to connect with the IEEEXtreme Team visit:

 Void where prohibited. Those residing in OFAC-embargoed countries may compete, but are not eligible for monetary awards.


Unmanned Aerial Vehicles – Innovation and Challenges

The IEEE and the Cyprus Computer Society present the state of the art of drone technologies and applications.

3 October 2017
University of Cyprus
Building KOD 07, Room 10
Starts at 17:00
Free Food and Drinks

Presentations:

  • [download id=”3817″]
  • [download id=”3824″]

[download id=”3742″]

[download id=”3743″]

  • Presentations on the state of the art of drone technologies & applications
  • Drone Piloting
  • Prize draw for IEEE student members:
    One-day piloting course worth €250, by DJI Cyprus

Program

  • 17:00-17:25: “UAV in Emergency Response: Research and Innovation Challenges”
    Dr. Panayiotis Kolios, KIOS Research and Innovation Center of Excellence
  • 17:25-18:15: “Demonstration of UAV automated functionalities”
    Mr. Petros Petrides, KIOS Research and Innovation Center of Excellence
  • 18:15-18:30: Coffee break
  • 18:30-18:50: “Regulations on drone aviation in Cyprus”
    Mr. Marios Louka, DJI Cyprus
  • 18:50-19:10: “Monitoring Power Systems Lines using Drones”
    Mr. Costas Stasopoulos, EAC, IEEE Region 8 Past-Director
  • 19:10-19:15: “Contribution of Cyprus Computer Society (CCS) in Cyprus”
    Mr. Costas Agrotis, CCS Chairman
  • 19:15-19:30: “Introduction of IEEE: Its Vision and Role”
    Mr. Nicos Michaelides, CYTA, IEEE Cyprus Section Chair
  • 19:30: Drinks and snacks @ U-Pub – Prize draw for IEEE student members



Really rough notes on compiling source code on Fedora 25 for STM32F767 Nucleo-144 (Nucleo-F767ZI)

#eclipse with support for C/C++
sudo dnf install -y eclipse-cdt;
#cross-compiler for arm
sudo dnf install -y arm-none-eabi-gcc arm-none-eabi-gdb arm-none-eabi-binutils arm-none-eabi-newlib arm-none-eabi-gcc-cs-c++;
#build and install openocd manually from its git repository, as the version in the Fedora repositories does not support our board (STM32F767 Nucleo-144 / Nucleo-F767ZI)
git clone http://openocd.zylin.com/openocd;
cd openocd/;
./bootstrap;
./configure;
make;
sudo make install;
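
#sanity check (my addition): make sure the freshly built openocd under /usr/local is the one found on the PATH, not a packaged copy
which openocd;
openocd --version;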

#download the eclipse plugin from https://my.st.com/content/my_st_com/en/products/development-tools/software-development-tools/stm32-software-development-tools/stm32-configurators-and-code-generators/stsw-stm32095.license%3d1491636351998.html
#install it from the menu “Help” > “Install New Software…” > “Add…” > “Archive…”. Find “en.stsw-stm32095.zip” and press OK. Tick the new repo and click Next.

#add http://gnuarmeclipse.sourceforge.net/updates as a repository in eclipse: menu “Help” > “Install New Software…” > “Add…”. Type the name “GNU ARM Eclipse” and the address “http://gnuarmeclipse.sourceforge.net/updates”. Press OK. Tick the new repo and click Next.

# copy st_nucleo_f7.cfg alongside the rest of the board configuration files, e.g. /usr/local/share/openocd/scripts/board/
sudo cp st_nucleo_f7.cfg /usr/local/share/openocd/scripts/board/

Create a new STM32F7xx project and set the memory (flash) size to 2048 kB.

Create a C/C++ Application run configuration.

Create a new OpenOCD run configuration to run the .elf created by the run above, and add the parameter

-f /usr/local/share/openocd/scripts/board/st_nucleo_f7.cfg

to the Config options field in the Debugger tab.

#if you get an error opening the usb device (really ugly hack; replace “george” with your username)
sudo usermod -a -G root george;
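
#a less invasive alternative (my addition, untested here): a udev rule for the on-board ST-LINK; 0483:374b is the usual ST-LINK/V2-1 id pair, confirm yours with lsusb
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="374b", MODE="0666"' | sudo tee /etc/udev/rules.d/99-stlink.rules;
sudo udevadm control --reload-rules;
sudo udevadm trigger;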

 

Needed packages:

  • sudo dnf install -y arm-none-eabi-gcc arm-none-eabi-gdb arm-none-eabi-binutils arm-none-eabi-newlib
  • Do not install openocd from the repositories; clone it from the git server instead, as it has a later version which supports our board.
    git clone http://openocd.zylin.com/openocd
    then build it

[download id=”2731″] copy it to where you have the rest of the board files
e.g. /usr/share/openocd/scripts/board/st_nucleo_f7.cfg

[download id=”2732″] copy it with the rest of the target configuration files
e.g. /usr/share/openocd/scripts/target/stm32f7x.cfg

The locations for the above files depend on your configuration

You need to download the STM32CubeF7 package (https://my.st.com/content/my_st_com/en/products/embedded-software/mcus-embedded-software/stm32-embedded-software/stm32cube-embedded-software/stm32cubef7.license%3d1487716364634.html), ~634 MB.
Extract it.

Navigate to a ready-made project, like GPIO_IOToggle in STM32Cube_FW_F7_V1.6.0/Projects/STM32F767ZI-Nucleo/Examples/GPIO/GPIO_IOToggle

Compile each .c file using the following command, but fix the paths!!! You also might need to include the Inc directory of the project
e.g.
arm-none-eabi-gcc -Wall -mcpu=cortex-m7 -mlittle-endian -mthumb -ISTM32Cube_FW_F7_V1.6.0/Drivers/CMSIS/Device/ST/STM32F7xx/Include -ISTM32Cube_FW_F7_V1.6.0/Drivers/CMSIS/Include -ISTM32Cube_FW_F7_V1.6.0/Drivers/STM32F7xx_HAL_Driver/Inc -I. -ISTM32Cube_FW_F7_V1.6.0/Drivers/BSP/STM32F7xx_Nucleo_144 -DSTM32F767xx -Os -c system_stm32f7xx.c -o system_stm32f7xx.o
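
If you would rather not repeat that for every source file by hand, a loop along these lines should work from inside the example's Src directory (my own sketch; the include paths are copied from the command above and the -I../Inc entry assumes the project's headers live in ../Inc):

for f in *.c; do
  arm-none-eabi-gcc -Wall -mcpu=cortex-m7 -mlittle-endian -mthumb -DSTM32F767xx -Os \
    -I. -I../Inc \
    -ISTM32Cube_FW_F7_V1.6.0/Drivers/CMSIS/Device/ST/STM32F7xx/Include \
    -ISTM32Cube_FW_F7_V1.6.0/Drivers/CMSIS/Include \
    -ISTM32Cube_FW_F7_V1.6.0/Drivers/STM32F7xx_HAL_Driver/Inc \
    -ISTM32Cube_FW_F7_V1.6.0/Drivers/BSP/STM32F7xx_Nucleo_144 \
    -c "$f" -o "${f%.c}.o";
done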

Merge all .o files into an .elf file

arm-none-eabi-gcc -mcpu=cortex-m7 -mlittle-endian -mthumb -DSTM32F767xx -TSTM32Cube_FW_F7_V1.6.0/Projects/STM32F767ZI-Nucleo/Templates/SW4STM32/STM32F767ZI_Nucleo_AXIM_FLASH/STM32F767ZITx_FLASH.ld -Wl,--gc-sections system_stm32f7xx.o main.o stm32f7xx_it.o -o main.elf

Convert the .elf file to a .hex

arm-none-eabi-objcopy -Oihex main.elf main.hex
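
As an optional sanity check (my addition, using the binutils installed earlier), size reports the section sizes of the image before you flash it:

arm-none-eabi-size main.elf
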
Start openocd to attach to the board

sudo ../src/openocd -f /usr/share/openocd/scripts/board/st_nucleo_f7.cfg

Use telnet to control the board

telnet localhost 4444

Flash the board

reset halt
flash write_image erase /home/xeirwn/Downloads/ST/GPIO_IOToggle/Src/main.hex
reset run
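
Alternatively, if your OpenOCD build has the program helper (recent versions do), the whole halt, flash and reset sequence can be done in a single invocation without the telnet session; a sketch, untested on this exact setup:

sudo ../src/openocd -f /usr/share/openocd/scripts/board/st_nucleo_f7.cfg -c "program main.hex verify reset exit"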

 

DONE

 

 

sudo dnf install eclipse-cdt-sdk;

download plugin from here https://my.st.com/content/my_st_com/en/products/development-tools/software-development-tools/stm32-software-development-tools/stm32-configurators-and-code-generators/stsw-stm32095.license%3d1491636351998.html

 

add http://gnuarmeclipse.sourceforge.net/updates as a repository in eclipse

sudo dnf install -y arm-none-eabi-gcc-cs-c++;

create a new OpenOCD run configuration and add the parameter

-f /usr/share/openocd/scripts/board/st_nucleo_f7.cfg

to the Config options field in the Debugger tab

Add RCC_OscInitStruct.PLL.PLLR = 7; to _initialize_hardware.c

 

I hope I did not forget anything.

Anyhow, this post will be updated soon.

 


ECE795 – Study Notes

1. INTRODUCTION

Page 2

Machine learning algorithms must be trained using a large set of known data and then tested using another, independent set before they are used on unknown data.

The result of running a machine learning algorithm can be expressed as a
function y(x) which takes a new x as input and that generates an output vector y, encoded in the same way as the target vectors. The precise form of the function y(x) is determined during the training phase, also known as the learning phase, on the basis of the training data. Once the model is trained it can then determine the identity of new elements, which are said to comprise a test set. The ability to categorize correctly new examples that differ from those used for training is known as generalization. In practical applications, the variability of the input vectors will be such that the training data can comprise only a tiny fraction of all possible input vectors, and so generalization is a central goal in pattern recognition.

The pre-processing stage is sometimes also called feature extraction. Note that new test data must be pre-processed using the same steps as the training data. The aim is to find useful features that are fast to compute, and yet that also preserve useful discriminatory information. Care must be taken during pre-processing because often information is discarded, and if this information is important to the solution of the problem then the overall accuracy of the system can suffer.

Page 3

Applications in which the training data comprises examples of the input vectors along with their corresponding target vectors are known as supervised learning problems. Cases such as the digit recognition example, in which the aim is to assign each input vector to one of a finite number of discrete categories, are called classification problems. If the desired output consists of one or more continuous variables, then the task is called regression. An example of a regression problem would be the prediction of the yield in a chemical manufacturing process in which the inputs consist of the concentrations of reactants, the temperature, and the pressure.

In other pattern recognition problems, the training data consists of a set of input
vectors x without any corresponding target values. The goal in such unsupervised learning problems may be to discover groups of similar examples within the data, where it is called clustering, or to determine the distribution of data within the input space, known as density estimation, or to project the data from a high-dimensional space down to two or three dimensions for the purpose of visualization.

Finally, the technique of reinforcement learning (Sutton and Barto, 1998) is concerned with the problem of finding suitable actions to take in a given situation in order to maximize a reward. Here the learning algorithm is not given examples of optimal outputs, in contrast to supervised learning, but must instead discover them by a process of trial and error. Typically there is a sequence of states and actions in which the learning algorithm is interacting with its environment. In many cases, the current action not only affects the immediate reward but also has an impact on the reward at all subsequent time steps.

The reward must then be attributed appropriately to all of the moves that led to it, even though some moves will have been good ones and others less so. This is an example of a credit assignment problem. A general feature of reinforcement learning is the trade-off between exploration, in which the system tries out new kinds of actions to see how effective they are, and exploitation, in which
the system makes use of actions that are known to yield a high reward.

1.1 Example: Polynomial Curve Fitting

Page 5

We fit the data using a polynomial function of the form:

[latex]y(x, w) = w_{0}x^{0} + w_{1}x^{1} + w_{2}x^{2} + . . . + w_{M}x^{M} = \sum_{j =0}^{M}w_{j}x^{j}[/latex]

where M is the order of the polynomial.

The values of the coefficients will be determined by fitting the polynomial to the
training data. This can be done by minimizing an error function that measures the misfit between the function y(x, w), for any given value of w, and the training set data points.

Our error function: Sum of the squares of the errors between the predictions [latex]y(x_{n} , w)[/latex] for each data point [latex]x_n[/latex] and the corresponding target values [latex]t_n[/latex].

[latex]E(w) = \frac{1}{2}\sum_{n=1}^{N}\{y(x_{n},w) - t_{n}\}^2[/latex]
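
Since y(x, w) is linear in the unknown coefficients, this minimization has a closed-form solution: setting the derivative of E(w) with respect to each coefficient to zero gives M + 1 linear equations (a sketch in the same notation, not reproduced from the book):

[latex]\frac{\partial E}{\partial w_{i}} = \sum_{n=1}^{N}\left(\sum_{j=0}^{M}w_{j}x_{n}^{j} - t_{n}\right)x_{n}^{i} = 0, \quad i = 0, \dots, M[/latex]

Solving this linear system gives the unique minimizer, the w* that appears in the RMS error below.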

A much higher-order polynomial can cause over-fitting: the fitted curve oscillates wildly and gives a very poor representation of the function.

We can obtain some quantitative insight into the dependence of the generalization performance on M by considering a separate test set comprising 100 data points generated using exactly the same procedure used to generate the training set points but with new choices for the random noise values included in the target values.

Root Mean Square: [latex]E_{RMS} = \sqrt{2E(w^{*})/N}[/latex]

The division by N allows us to compare different sizes of data sets on an equal footing, and the square root ensures that [latex]E_{RMS}[/latex] is measured on the same scale (and in the same units) as the target variable t.

 

OTHER

To study

Curve Fitting

Additional Terms

Legend:

  • TP = True Positive
  • TN = True Negative
  • FP = False Positive
  • FN = False Negative

[latex]correct\ rate (accuracy) = \frac{TP + TN}{TP + TN + FP + FN}[/latex]

[latex]sensitivity = \frac{TP}{TP + FN}[/latex]

[latex]specificity = \frac{TN}{TN + FP}[/latex]
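
A quick worked example with made-up counts, say TP = 40, FN = 10, TN = 45 and FP = 5 (100 cases in total):

[latex]accuracy = \frac{40 + 45}{100} = 0.85, \quad sensitivity = \frac{40}{40 + 10} = 0.80, \quad specificity = \frac{45}{45 + 5} = 0.90[/latex]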

Receiver Operating Characteristic

In statistics, a receiver operating characteristic (ROC), or ROC curve, is a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied. The curve is created by plotting the true positive rate (sensitivity) against the false positive rate (1 - specificity) at various threshold settings.
— From https://en.wikipedia.org/wiki/Receiver_operating_characteristic

[Figure: ROC curve, sensitivity vs. 1 - specificity]

Assuming we have a system whose configuration we can vary to produce the results above, we would pick the configuration that has the smallest Euclidean distance from the perfect configuration. The perfect configuration is found at the point (0, 1), where sensitivity and specificity are both equal to one.
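
In other words (my own formalization of the note above), with the ROC axes taken as 1 - specificity on the horizontal axis and sensitivity on the vertical axis, the distance being minimized for each candidate configuration is:

[latex]d = \sqrt{(1 - specificity)^{2} + (1 - sensitivity)^{2}}[/latex]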

Chapters to ignore

Non-Parametric